Image: Shutterstock

Hackers Hijacked ASUS Software Updates to Install Backdoors on Thousands of Computers

The Taiwan-based tech giant ASUS is believed to have pushed the malware to hundreds of thousands of customers through its trusted automatic software update tool after attackers compromised one of the company’s update servers.

Mar 25 2019, 1:00pm


Researchers at cybersecurity firm Kaspersky Lab say that ASUS, one of the world’s largest computer makers, unwittingly installed a malicious backdoor on thousands of its customers’ computers last year after attackers compromised a server for the company’s live software update tool. The malicious file was signed with legitimate ASUS digital certificates to make it appear to be an authentic software update from the company, Kaspersky Lab says.

ASUS, a multi-billion dollar computer hardware company based in Taiwan that manufactures desktop computers, laptops, mobile phones, smart home systems, and other electronics, was pushing the backdoor to customers for at least five months last year before it was discovered, according to new research from the Moscow-based security firm.

The researchers estimate half a million Windows machines received the malicious backdoor through the ASUS update server, although the attackers appear to have been targeting only about 600 of those systems. The malware searched for targeted systems through their unique MAC addresses. Once on a system, if it found one of these targeted addresses, the malware reached out to a command-and-control server the attackers operated, which then installed additional malware on those machines.

Kaspersky Lab said it uncovered the attack in January after adding a new supply-chain detection technology to its scanning tool to catch anomalous code fragments hidden in legitimate code or catch code that is hijacking normal operations on a machine. The company plans to release a full technical paper and presentation about the ASUS attack, which it has dubbed ShadowHammer, next month at its Security Analyst Summit in Singapore. In the meantime, Kaspersky has published some of the technical details on its website.

“We saw the updates come down from the Live Update ASUS server. They were trojanized, or malicious updates, and they were signed by ASUS."

The issue highlights the growing threat from so-called supply-chain attacks, where malicious software or components get installed on systems as they’re manufactured or assembled, or afterward via trusted vendor channels. Last year the US launched a supply chain task force to examine the issue after a number of supply-chain attacks were uncovered in recent years. Although most attention on supply-chain attacks focuses on the potential for malicious implants to be added to hardware or software during manufacturing, vendor software updates are an ideal way for attackers to deliver malware to systems after they’re sold, because customers trust vendor updates, especially if they’re signed with a vendor’s legitimate digital certificate.

“This attack shows that the trust model we are using based on known vendor names and validation of digital signatures cannot guarantee that you are safe from malware,” said Vitaly Kamluk, Asia-Pacific director of Kaspersky Lab’s Global Research and Analysis Team, who led the research. He noted that when the researchers contacted the company in January, ASUS denied to Kaspersky that its server had been compromised or that the malware had come from its network. But the download path for the malware samples Kaspersky collected leads directly back to the ASUS server, Kamluk said.
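Kamluk’s point, that a valid vendor signature alone cannot guarantee safety, suggests one possible extra defensive layer: pinning the hashes of known-good update binaries obtained out of band. The sketch below is purely illustrative (the allowlist contents and file handling are assumptions, not anything ASUS or Kaspersky ships):

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests for known-good update binaries,
# obtained through a channel separate from the update server itself.
KNOWN_GOOD_SHA256 = {
    # SHA-256 of an empty file, used here only as a stand-in entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_pinned(update_path: Path) -> bool:
    """Accept an update only if its hash is on the allowlist.

    A valid digital signature would be checked separately; this second
    check matters because a compromised signing server produces validly
    signed malware, as in the ShadowHammer case.
    """
    digest = hashlib.sha256(update_path.read_bytes()).hexdigest()
    return digest in KNOWN_GOOD_SHA256
```

The trade-off is operational: the allowlist has to be distributed over a channel the attacker has not also compromised.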

Motherboard sent ASUS a list of the claims made by Kaspersky in three separate emails on Thursday but has not heard back from the company.

Read more: What Is a 'Supply Chain Attack?'

But the US-based security firm Symantec confirmed the Kaspersky findings on Friday after being asked by Motherboard to see if any of its customers also received the malicious download. The company is still investigating the matter but said in a phone call that at least 13,000 computers belonging to Symantec customers were infected with the malicious software update from ASUS last year.

“We saw the updates come down from the Live Update ASUS server. They were trojanized, or malicious updates, and they were signed by ASUS,” said Liam O’Murchu, director of development for the Security Technology and Response group at Symantec.

This is not the first time attackers have used trusted software updates to infect systems. The infamous Flame spy tool, developed by some of the same attackers behind Stuxnet, was the first known attack to trick users in this way by hijacking the Microsoft Windows updating tool on machines to infect computers. Flame, discovered in 2012, was signed with an unauthorized Microsoft certificate that attackers tricked Microsoft’s system into issuing to them. The attackers in that case did not actually compromise Microsoft’s update server to deliver Flame. Instead, they were able to redirect the software update tool on the machines of targeted customers so that they contacted a malicious server the attackers controlled instead of the legitimate Microsoft update server.

Two other attacks discovered in 2017 also compromised trusted software updates. One involved the computer security cleanup tool known as CCleaner, which delivered malware to customers via a software update; more than 2 million customers received the malicious update before it was discovered. The other was the infamous NotPetya attack, which began in Ukraine and infected machines via a malicious update to an accounting software package.

Costin Raiu, company-wide director of Kaspersky’s Global Research and Analysis Team, said the ASUS attack is different from these others. “I’d say this attack stands out from previous ones while being one level up in complexity and stealthiness. The filtering of targets in a surgical manner by their MAC addresses is one of the reasons it stayed undetected for so long. If you are not a target, the malware is virtually silent,” he told Motherboard.

But even if silent on non-targeted systems, the malware still gave the attackers a backdoor into every infected ASUS system.

Tony Sager, senior vice president at the Center for Internet Security who did defensive vulnerability analysis for the NSA for years, said the method the attackers chose to target specific computers is odd.

“Supply chain attacks are in the ‘big deal’ category and are a sign of someone who is careful about this and has done some planning,” he told Motherboard in a phone call. “But putting something out that hits tens of thousands of targets when you’re really going only after a few is really going after something with a hammer.”

Kaspersky researchers first detected the malware on a customer’s machine on January 29. After they created a signature to find the malicious update file on other customer systems, they discovered that more than 57,000 Kaspersky customers had been infected with it. That victim toll only accounts for Kaspersky customers, however. Kamluk said the real number is likely in the hundreds of thousands.

The largest share of infected machines belonging to Kaspersky customers (about 18 percent) were in Russia, followed by smaller numbers in Germany and France. Only about 5 percent of infected Kaspersky customers were in the United States. Symantec’s O’Murchu said that about 15 percent of the 13,000 machines belonging to his company’s infected customers were in the U.S.

Kamluk said Kaspersky notified ASUS of the problem on January 31, and a Kaspersky employee met with ASUS in person on February 14. But he said the company has been largely unresponsive since then and has not notified ASUS customers about the issue.

The attackers used two different ASUS digital certificates to sign their malware. When the first expired in mid-2018, they switched to a second legitimate ASUS certificate.

Kamluk said ASUS continued to use one of the compromised certificates to sign its own files for at least a month after Kaspersky notified the company of the problem, though it has since stopped. But Kamluk said ASUS has still not invalidated the two compromised certificates, which means the attackers or anyone else with access to the un-expired certificate could still sign malicious files with it, and machines would view those files as legitimate ASUS files.

This wouldn't be the first time ASUS was accused of compromising the security of its customers. In 2016, the company was charged by the Federal Trade Commission with misrepresentation and unfair security practices over multiple vulnerabilities in its routers, cloud back-up storage and firmware update tool that would have allowed attackers to gain access to customer files and router log-in credentials, among other things. The FTC claimed ASUS knew about those vulnerabilities for at least a year before fixing them and notifying customers, putting nearly a million US router owners at risk of attack. ASUS settled the case by agreeing to establish and maintain a comprehensive security program that would be subject to independent audit for 20 years.

The ASUS live update tool that delivered malware to customers last year is installed at the factory on ASUS laptops and other devices. When users enable it, the tool contacts the ASUS update server periodically to see if any firmware or other software updates are available.

“They wanted to get into very specific targets and they already knew in advance their network card MAC address, which is quite interesting.”

The malicious file pushed to customer machines through the tool was called setup.exe and purported to be an update to the update tool itself. It was actually a three-year-old ASUS update file from 2015 that the attackers injected with malicious code before signing it with a legitimate ASUS certificate. The attackers appear to have pushed it out to users between June and November 2018, according to Kaspersky Lab. Kamluk said the use of an old binary with a current certificate suggests the attackers had access to the server where ASUS signs its files but not to the build server where it compiles new ones; because they reused the same binary each time, they likely controlled only part of the signing infrastructure, not the whole ASUS network. Legitimate ASUS software updates were still pushed to customers during this period, but those were signed with a different certificate that used enhanced validation protection, making it more difficult to spoof, Kamluk said.

The Kaspersky researchers collected more than 200 samples of the malicious file from customer machines, which is how they discovered the attack was multi-staged and targeted.

Buried in those malicious samples were hard-coded MD5 hash values that turned out to be unique MAC addresses for network adapter cards. MD5 is a hashing algorithm that produces a short, fixed-length cryptographic fingerprint of whatever data is run through it. Every network card has a unique ID or address assigned by the card’s manufacturer, and the attackers hashed each MAC address they were seeking before hard-coding those hashes into their malicious file, to make it more difficult to see what the malware was doing. The malware contained hashes for 600 unique MAC addresses, though the actual number of targeted customers may be larger: Kaspersky can only see the MAC addresses that were hard-coded into the particular malware samples found on its customers’ machines.


The Kaspersky researchers were able to crack most of the hashes they found to determine the MAC addresses, which helped them identify what network cards the victims had installed on their machines, but not the victims themselves. Any time the malware infected a machine, it collected the MAC address from that machine’s network card, hashed it, and compared that hash against the ones hard-coded in the malware. If it found a match to any of the 600 targeted addresses, the malware reached out to asushotfix.com, a site masquerading as a legitimate ASUS site, to fetch a second-stage backdoor that it downloaded to that system. Because only a small number of machines contacted the command-and-control server, this helped the malware stay under the radar.
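The matching logic Kaspersky describes can be sketched in a few lines. Everything concrete below is hypothetical: the target hashes are made up, and the exact byte format of the MAC address that the real malware hashed has not been published, so the normalization here is an assumption:

```python
import hashlib

# Hypothetical stand-ins for the ~600 MD5 hashes of targeted MAC
# addresses that were hard-coded into the malicious samples.
TARGET_HASHES = {
    hashlib.md5(b"00:11:22:33:44:55").hexdigest(),
}

def normalize_mac(mac: str) -> bytes:
    # Assumed canonical form: lowercase, colon-separated.
    return mac.strip().lower().replace("-", ":").encode()

def is_target(mac: str) -> bool:
    """Hash the local MAC and check it against the hard-coded list,
    the way the first-stage implant decided whether to call home."""
    return hashlib.md5(normalize_mac(mac)).hexdigest() in TARGET_HASHES
```

Storing hashes rather than raw MAC addresses meant analysts had to brute-force the digests to learn who was being hunted, which is what the Kaspersky researchers did.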

“They were not trying to target as many users as possible,” said Kamluk. “They wanted to get into very specific targets and they already knew in advance their network card MAC address, which is quite interesting.”

Symantec’s O’Murchu said he’s not sure yet if any of his company’s customers were among those whose MAC addresses were on the target list and received the second-stage backdoor.

The command-and-control server that delivered the second-stage backdoor was registered May 3 last year but was shut down in November before Kaspersky discovered the attack. Because of this, the researchers were unable to obtain a copy of the second-stage backdoor pushed out to victims or identify victim machines that had contacted that server. Kaspersky believes at least one of its customers in Russia got infected with the second-stage backdoor when his machine contacted the command-and-control server on October 29 last year, but Raiu says the company doesn’t know the identity of the machine’s owner in order to contact him and investigate further.

There were early hints that a signed and malicious ASUS update was being pushed to users in June 2018, when a number of people posted comments in a Reddit forum about a suspicious ASUS alert that popped up on their machines for a “critical” update. “ASUS strongly recommends that you install these updates now,” the alert warned.

In a post titled “ASUSFourceUpdater.exe is trying to do some mystery update, but it won't say what,” a user named GreyWolfx wrote, “I got an update popup from a .exe that I had never seen before today….I’m just curious if anyone knows what this update would possibly be for?”

When he and other users clicked on their ASUS updater tool to get information about the update, the tool showed no recent updates had been issued from ASUS. But because the file was digitally signed with an ASUS certificate and because scans of the file on the VirusTotal web site indicated it was not malicious, many accepted the update as legitimate and downloaded it to their machines. VirusTotal is a site that aggregates dozens of antivirus programs; users can upload suspicious files to the site to see if any of the tools detect it as malicious.

“I uploaded the executable [to VirusTotal] and it comes back as a validly signed file without issue,” one user wrote. “The spelling of 'force' and the empty details window are indeed odd, but I noticed odd grammar errors in other ASUS software installed on this system, so it's not a smoking gun by itself,” he noted.

Kamluk and Raiu said this may not be the first time the ShadowHammer attackers have struck. They said they found similarities between the ASUS attack and ones previously conducted by a group dubbed ShadowPad by Kaspersky. ShadowPad targeted a Korean company that makes enterprise software for administering servers; the same group was also linked to the CCleaner attack. Although millions of machines were infected with the malicious CCleaner software update, only a subset of these got targeted with a second stage backdoor, similar to the ASUS victims. Notably, ASUS systems themselves were on the targeted CCleaner list.

The Kaspersky researchers believe the ShadowHammer attackers were behind the ShadowPad and CCleaner attacks and obtained access to the ASUS servers through the latter attack.

“ASUS was one of the primary targets of the CCleaner attack,” Raiu said. “One of the possibilities we are taking into account is that’s how they initially got into the ASUS network and then later through persistence they managed to leverage the access … to launch the ASUS attack.”

Listen to CYBER, Motherboard’s new weekly podcast about hacking and cybersecurity.

 
Twitter Told a Bunch of Users They May Be Targets of a 'State Sponsored Attack' - Motherboard

On Friday, a number of Twitter users were told their accounts were targeted in a “state-sponsored attack," but there is no clear connection between the targets.

Dec 12 2015, 1:26am

Image: Shutterstock

Twitter is letting some users know that their accounts may have been the targets of a state-sponsored attack.

The attack is currently being investigated by Twitter. In its notice to users, Twitter said that the attack only impacted usernames, IP addresses, email addresses, and phone numbers if a phone number was associated with the account. Twitter did not say which state was implicated—it could have been China, Russia, or even the US.

I spoke to a number of Twitter users who received the notice. A couple are engaged in activism and are connected to the Tor Project in some capacity. A few are located in Canada, and vaguely associated with the security community at large. However, I could not determine any common factors between all recipients. They all received the notice around the same time, between 5:15 and 5:16 PM EST.

Cassie, or @myriadmystic, who runs cryptoparties in Minnesota, forwarded us her notice. The email links out to the Tor Project website and to the Electronic Frontier Foundation's Surveillance Self Defense page.

While Google and Facebook have standing policies (with Google's starting in 2012, and Facebook's in October 2015) of sending out notices for suspected state-sponsored attacks, Twitter has never made a formal announcement for a similar policy. This is the first time the company has sent out notices to users thought to have been the target of state-sponsored hacking.

The first tweet about the notice to attract attention was from @coldhakca, which describes itself as "a nonprofit dedicated to furthering privacy, security and freedom of speech." The members of coldhak are located in Winnipeg, Canada.

I asked the group over email why they may have been targeted. They responded: "Colin Childs, one of the founding directors of coldhak, is a contractor for Tor Project and, as such, is a likely target for this type of attention. It could also be because of the Tor relays coldhak operates, or the coldkernel project that coldhak is currently developing."

Colin Childs also received a notice for his personal account.

Security researcher, activist, and writer Runa Sandvik was also a recipient. Sandvik, who used to work for the Tor Project and now trains journalists in privacy and security, guessed that the notice is related to her work. "I spend a lot of time talking about how to protect your information and digital security in general," she said.

But when she looked at other tweets from people who received the same notice, "it didn't seem like there was a really clear link," she said.

Furthermore, the notice was "not terribly helpful," Sandvik said, since it didn't give her any information about who it was or what had flagged Twitter's suspicions. She noted that she has two-factor authentication enabled, and had not seen any suspicious login attempts.

"Why would a government want to know more about me?"

Sandvik also criticized Twitter for recommending in its notice that she use Tor to protect herself, because the company doesn't always allow users to access the site through Tor. "In the past, users who use Tor to access their Twitter account, and who choose not to give Twitter their phone numbers, would sometimes find their accounts have been blocked," she said.

(Twitter has denied blocking Tor. In September, Twitter spokesperson Nu Wexler told Motherboard, "Twitter does not block Tor, and many Twitter users rely on the Tor network for the important privacy and security it provides. Occasionally, signups and logins may be asked to phone verify if they exhibit spam-like behavior. This is applicable to all IPs and not just Tor IPs.")

When asked for comment on Friday, Wexler pointed out that both Google and Facebook send out similar notices for suspected state-sponsored attacks. None of the people I spoke to reported receiving similar notices from other platforms in the past.

Overall, there are no clear links between users, but there are some patterns. So far, Motherboard has found 12 users who received notices at the same time. Motherboard spoke to seven of them.

There are a number of users targeted who are based in Canada and related to the security community. Toronto-based Noris Fabio received a notice, and suggested that it was because he had described himself as a security researcher in his Twitter bio.

Phil Schleihauf, a software developer in Kingston Ontario, Canada, also received a notice. But he was unsure as to whether he'd been personally targeted at all. "Twitter suggests in their message that it's possible that I wasn't the target, and that seems likely to me," he said to me in an email. "That said, while I don't personally work on security research, I'm somewhat engaged in that community and know/follow/interact with people who do, so maybe they were targeting broadly?"

Americans received notices as well. Cassie, an activist who runs cryptoparties in Minnesota, said in an email, "I suspect a technical activist is threatening to many in power. From perusing the others who received the same notices, it looks like a bunch of security/encryption/activist folks, which is quite fascinating, given the recent uptick in politicians wanting to ban encryption of varying sorts."

But some recipients weren't even loosely related to Canada or the security community.

One user who wished to stay anonymous was based in Australia and didn't have any links to the Tor Project or to the security community. "I don't even follow @SwiftOnSecurity," she said in an email, referring to a popular infosecurity-themed Twitter account.

When asked why she thought she might have been targeted, she was at a loss. "I'm left-wing, I retweet a few things about politics every day. Lately it's been a lot of stuff supporting christian & muslim solidarity in the fight against ISIS. But also Blacklivesmatter stuff, feminist stuff. But I'm not an activist at all," she said.

"It's all just very very strange to me," she added. "I think of myself as keeping a pretty low profile, I'm a big believer in the democratic process and non-violence, I'm hardly radical. Why would a government want to know more about me? I think it makes that government look pretty authoritarian if it can't even tolerate a mild lefty like myself to have my pro-democratic, non-violent, faith-in-our-common-humanity views."
 
China's New App Encourages its Citizens to Find and Report People in Debt - VICE

The app shows all "deadbeat debtors" within a 500 metre radius.

29 January 2019, 11:58pm

Image via MaxPixel, CC0 (L); YouTube/DailyNewsUSA

Earlier this month, Chinese authorities released an app that allows users to locate anyone within 500 metres with an unpaid fine. The mini-program is an extension of the popular messaging service WeChat, China Daily reports, and has been described by the Higher People’s Court of Hebei as “a map of deadbeat debtors”: a way for everyday people to sniff out those who are neglecting to pay their debts, and report them to the relevant authorities.

Deadbeat debtors are derogatorily known as laolai in China, and are typically treated with disdain, as described by the Independent. The idea of this app is to essentially crowd-source a crackdown on insolvent borrowers by allowing users to find laolai in their area via an on-screen radar. The radar covers a 500 metre radius around the user, and changes colour depending on the concentration of laolai within that sphere: red for most concentrated, then orange, then yellow, and then blue. Tapping on the culprits reveals a wealth of personal information about them, according to Radii Media: including their full name, court case number, ID card number, home address, and the reason they’re on the list. If the user thinks that a laolai can afford to pay back their debt, but is simply neglecting to do so, they can then report them.
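The colour-coded radar described above amounts to simple bucketing of a count into severity bands. A toy sketch for illustration only (the thresholds are invented; the reports do not say what cutoffs the real app uses):

```python
def radar_color(debtor_count: int) -> str:
    """Map the number of listed debtors inside the 500 m radius to a
    radar colour, using made-up thresholds purely for illustration."""
    if debtor_count >= 50:
        return "red"      # most concentrated
    if debtor_count >= 20:
        return "orange"
    if debtor_count >= 5:
        return "yellow"
    return "blue"         # least concentrated
```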

"It's a part of our measures to enforce our rulings and create a socially credible environment," a court spokesman said of the app. It’s also part of China’s broader “social credit” system, whereby citizens are awarded a score based on their “behaviour and trustworthiness”, Wired reports. Acts such as jaywalking, playing music loudly on public transport, or, importantly, failing to pay a court bill will all lower a person’s social credit score—and when one’s score is too low, they lose privileges such as being able to book a flight or a train ticket. Certain reports also speak of a “blacklist” within the system that works in a similar way: if you refuse to pay a fine then you could be blacklisted by the government, who will in turn refuse you certain privileges and creature comforts.

Although work on the social credit scoring algorithm is not yet complete, around 18 million people have already been banned from flying—and 5.5 million from purchasing high-speed train tickets—because of outstanding debts.

Follow Gavin on Twitter or Instagram

 
Facebook’s Phone Number Policy Could Push Users to Not Trust Two-Factor Authentication - Motherboard

Users are angry that Facebook is letting others, including advertisers, look up users via the phone numbers they provided to enable two-factor authentication.

Mar 4 2019, 5:35pm

Image: Shutterstock

Using two-factor authentication, a security mechanism that requires a second step beyond the password to log into an account, is widely considered an essential measure to protect yourself online. Yet only a small percentage of people use the feature, mostly because it can be burdensome and is rarely on by default, leaving users responsible for turning it on.

Now, Facebook may have given people yet another reason not to bother.

Last week, Emojipedia founder Jeremy Burge warned in a viral Twitter thread that anyone could look him up on Facebook using his phone number, which he provided to the social network in order to enable two-factor authentication.

This does not appear to be a new feature. Last year, academic researchers found that if you provided the social network with a phone number solely to turn on two-factor authentication, advertisers could then use that number to target you. In May of last year, Facebook stopped requiring a phone number for two-factor authentication.

But people did not realize this was happening—and are pissed about it, calling it “outrageous,” and “unconscionable.”

What’s worse, it looks like there’s no way to completely remove your phone number that Facebook has collected. If you check your privacy settings, under “Who can look you up using the phone number you provided?” there are only three options: Everyone, Friends of friends, and Friends. “Everyone” is the default.

Even if you remove your phone number from the two-factor authentication settings page, nothing changes in the privacy settings, indicating Facebook still has your phone number.

This screw-up, intentional or not, could discourage adoption of two-factor authentication, leaving people at risk of getting hacked. Facebook’s decision to use phone numbers given to it for a specific security purpose for reasons other than security is a betrayal, and it teaches people more broadly that turning over personal information to an internet company for security features can backfire.

“Phone number is such a private, important security link,” Zeynep Tufekci, a professor at the University of North Carolina, Chapel Hill, who has worked with dissidents and human rights activists, wrote on Twitter. “But Facebook will even let you be targeted for ads through phone numbers INCLUDING THOSE PROVIDED *ONLY* FOR SECOND FACTOR AUTHENTICATION. Messing with 2FA is the anti-vaccination misinformation of security.”

Got a tip? You can contact this reporter securely on Signal at +1 917 257 1382, OTR chat at lorenzofb@jabber.ccc.de, or email lorenzo@motherboard.tv

Harlo Holmes, a digital security trainer at Freedom of the Press Foundation, said that this is “a picture perfect example of a ‘user being the product.’”

Two-factor authentication “is essential, but you give up a lot of privacy in simply using the service,” she told Motherboard in an online chat.

We reached out to Facebook for comment and will update this story if we hear back.

According to Alex Stamos, Facebook’s former chief security officer, “there was supposed to be a big project to segregate numbers” while he was there, but it apparently went nowhere.

“This isn’t a mistake now, this is clearly an intentional product choice,” he tweeted.

If you’ve never provided a phone to Facebook, you can still use two-factor authentication with an app that provides you security codes, or a physical USB key, which is even more secure against phishing attacks.


 
How 1.5 Million Connected Cameras Were Hijacked to Make an Unprecedented Botnet - Motherboard

As many predicted, hackers are starting to use your Internet of Things to launch cyberattacks.

Sep 29 2016, 4:03pm

Image: EFF Photos/Flickr

Last week, hackers forced a well-known security journalist to take down his site after hitting him for more than two days with an unprecedented flood of traffic.

That cyberattack was powered by something the internet had never seen before: an army made of more than one million hacked Internet of Things devices.

The hackers, whose identity is still unknown at this point, used not one but two networks—commonly referred to as "botnets" in hacking lingo—made up of around 980,000 and 500,000 hacked devices, respectively, mostly internet-connected cameras, according to Level 3 Communications, one of the world's largest internet backbone providers. The attackers used all those cameras and other unsecured online devices to connect to the journalist's website, pummeling the site with requests in an attempt to make it collapse.

These botnets were allegedly behind the staggering and crippling distributed denial of service (DDoS) attack on KrebsOnSecurity.com, the website of the independent journalist Brian Krebs, who has a long history of exposing DDoS-wielding cybercriminals. The digital assault surpassed 660 Gbps of traffic, making it one of the largest ever recorded in terms of volume.

Read more: The Internet of Things Will Turn Large-Scale Hacks into Real World Disasters

Level 3 has been tracking one of the botnets used against Krebs for about a month, and last week the company saw that hackers used that botnet, along with another smaller one, against Krebs.

"They're still using it against Krebs," Dale Drew, chief security officer at Level 3 Communications, told Motherboard on Wednesday. "As of this morning."

Security researchers and internet defenders are still looking into the attacks and trying to track who's behind them, but people who've been working to protect websites against large DDoS attacks such as this one all agree it was unprecedented, both because of its shocking size and because of the use of what could be called a Botnet of Things.

"This was the biggest attack we've ever seen," Martin McKeay, the senior security advocate for Akamai, the company that was providing protection to Krebs when the attack started last week, told me.

At this point, however, it's unclear if the attackers used the full power of the two botnets or just a portion of them. Drew said that the hackers used around 1.2 million nodes of the total 1.5 million-strong botnets against Krebs. But McKeay, who declined to go into the details of the attacks citing company policies toward customers, said that "nothing" Akamai saw suggests those numbers are "possible." (Akamai, which was providing Krebs with pro-bono protection, decided to let him go when it became too costly to hold off the barrage of traffic.)

"This was the biggest attack we've ever seen."

The attack against Krebs, along with other similar attacks launched across the internet in the last few weeks, might signal the beginning of a new era where criminals use easily hackable Internet of Things devices to censor websites or launch malware attacks—a nightmare scenario that some saw as inevitable.

"We're starting to see the first consequences of these poorly secured devices and the damage they can do when they are compromised," said Matthew Prince, the founder of Cloudflare, a company that offers DDoS protection. "I don't know that many other organizations have seen the full capabilities of this botnet pointed at them. But of course it's inevitable. Whenever the attack on Krebs is over, anyone else on the internet is vulnerable to having this type of attack pointed at them."

The DDoS attack on Krebs was unusual not just because of its sheer size and volume, but because most of the traffic used in it was direct. In the last few years, hackers have launched large DDoS attacks by tricking faulty servers into boosting their malicious traffic. In these attacks, the servers generate multiple response packets for each packet sent to them. They are known as amplification or reflection attacks, and they essentially give hackers more firepower than they actually have.
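The arithmetic behind that borrowed firepower is straightforward. Here is a minimal sketch; the 64-byte request and 3,000-byte response are illustrative, hypothetical figures, not measurements from these attacks.

```python
# Toy illustration of DDoS amplification (hypothetical packet sizes).
# In a reflection attack, the attacker spoofs the victim's IP address in
# a small request; the server then sends its much larger response to the
# victim instead of back to the attacker.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification: bytes hitting the victim per byte sent."""
    return response_bytes / request_bytes

# Example: a 64-byte query that triggers a ~3,000-byte response
# (figures are illustrative only).
factor = amplification_factor(64, 3000)
print(f"amplification: {factor:.1f}x")  # each attacker byte becomes ~47 bytes
```

A direct attack, by contrast, has a factor of 1: every byte the victim receives is a byte the attackers' own machines actually sent, which is why the size of the Krebs attack implied genuinely enormous botnets.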

In this case, however, whoever is behind the attack really had all that firepower.

"The attackers were not just sending garbage traffic that was easy to tell it didn't belong there," Prince said, "but they were sending relatively legitimate requests."

HOW THE INTERNET OF THINGS ZOMBIE ARMY WAS FORMED

According to Level 3, the larger botnet used against Krebs is made mostly of internet-connected security cameras made by DAHUA Technology, a Chinese manufacturer of cameras and DVRs with a subsidiary in California. Level 3 had already revealed the existence of the 1 million-strong botnet in late August.

Drew explained that the hackers found a vulnerability, which affects most of DAHUA's cameras, that allows anyone to take full control of the devices' underlying Linux operating system just by typing a random username with too many characters.

The hackers then planted malware on the devices to turn them into bots and use them both for DDoS attacks and for extortion campaigns using ransomware, Drew said. The malware specifically targets Linux devices and is part of a family that previously went by the names Lizkebab, BASHLITE, Torlus, and gafgyt, according to Level 3 and others who have been investigating the attacks.

"These cameras are going to be exposed for quite some time."

The hackers used the latest iteration of that malware family, now called Mirai, according to Marshal Webb, the chief technology officer of BackConnect, an anti-DDoS firm.

Mirai appears to be spreading fast. A security researcher put online six virtual machines designed to look like ADSL routers running Linux operating systems just like the ones targeted by Mirai—in other words, a set of honeypots.

It took only an average of 15 minutes for these to get hit with Mirai malware, the researcher, who asked to be referred to as "Jack B." to protect his real identity, told me in an online chat. (If you didn't just say "holy shit," you probably should have.)

DAHUA did not respond to a request for comment. But Drew said that the company has been notified of the vulnerability and is working on a fix. The problem, he said, is that there's no way for DAHUA to remotely fix the flaw, and customers will have to download new firmware and update the cameras themselves.

"These cameras are going to be exposed for quite some time," Drew said.

The botnet is not made solely of DAHUA devices, though. Researchers I spoke to also listed other embedded devices such as home routers and Linux servers.

WHODUNNIT?

The very nature of this kind of attack, whose bogus traffic comes from several sources, makes it hard to pinpoint and unmask who's really behind the keyboard.

In the last few weeks, whoever is behind the attack on Krebs appears to have used the same botnet or botnets in similar attacks against other targets, such as the official site of the Rio Olympics, which was hit with a DDoS clocking in at 540 Gbps, according to Arbor Networks.

That attack used a form of traffic designed to look like Generic Routing Encapsulation (GRE) data packets, an unusual choice of protocol for a DDoS attack. The hackers behind the Krebs attack, as the journalist himself reported, also used GRE traffic.

Also last week, French hosting provider OVH quietly reported a series of large DDoS attacks, some recording as much as 900 Gbps and 1 Tbps.

OVH declined to comment, and at this point, it's unclear if the attacks on Krebs and OVH are connected.

Some circumstantial evidence seems to point in the direction of groups like Lizard Squad and PoodleCorp, who've made a name for themselves using DDoS attacks to disrupt mostly gaming platforms and websites in the past.

Mirai, the malware allegedly used to build the massive million-strong botnet, for one, is a successor of IoT-infecting malware used by Lizard Squad in the past. But anyone could be using the malware's new iterations.

During the attack last week, a hacker who goes by the name "BannedOffline" on Twitter hinted he was part of the attack in a series of tweets.

But the hacker said he was only one of many attackers.

"I'm not the only one who doesn't like [Krebs] or his site," BannedOffline told me in an online chat. "No one likes him lol. At least in the hacker community."

A hacker who goes by the name Cripthepoodle, and who claimed to be once part of PoodleCorp, said the group was behind the attack.

"They love causing as much as chaos as they can," Cripthepoodle told me.

Last week, when Krebs disclosed that his site was temporarily shutting down, PoodleCorp seemed to poke fun at him in a now-deleted tweet sent by its semi-official Twitter account, most likely a jab at the journalist, who regularly reports on and exposes hacktivist groups.

Whoever is behind these attacks, in any case, is likely being hunted not just by researchers, but also law enforcement. (The FBI declined to comment on whether the bureau is investigating these attacks.)

The attack on Krebs' website was so powerful, according to Prince and Level 3, that it congested some internet routes, spilling over the effects of the DDoS to some parts of the internet. While this might not have been noticed by people watching Netflix or checking their email, it was certainly noticed by internet service providers and likely the authorities.

"When you launch an attack which is large enough that it starts to impact internet infrastructure, it's not long before you get caught," Prince said.

Even if the hackers behind the attacks get caught, these massive DDoS attacks wielding infected Internet of Things devices could just be the first in a long series, as other criminals will see them as an inspiration.

"I'm certain that there are other smart 15-year-old kids rounding up botnets of CCTV cameras that they can compromise and control," Prince said.

Or, as Akamai's McKeay put it, this is "a bad sign for the internet."

Correction: a previous version of this article stated that DAHUA is an American company. In fact, it is a Chinese company, with a subsidiary in the US.


 

Researchers Used Sonar Signal From a Smartphone Speaker to Steal Unlock Passwords

Researchers at Lancaster University have used an active acoustic side-channel attack to steal smartphone passwords for the first time.

Sep 4 2018, 12:00pm

Image: Shutterstock

On Thursday, a group of researchers from Lancaster University posted a paper to arXiv that demonstrates how they used a smartphone’s microphone and speaker system to steal the device’s unlock pattern.

Although the average person doesn’t have to worry about getting hacked this way any time soon, the researchers are the first to demonstrate that this kind of attack is even possible. According to the researchers, their “SonarSnoop” attack decreases the number of unlock patterns an attacker must try by 70 percent and can be performed without the victim ever knowing they’re being hacked.

In the infosec world, a “side-channel attack” is a type of hack that doesn’t exploit weaknesses in the program ultimately being targeted or require direct access to the target information. In the case of SonarSnoop, for example, the information the hacker is looking for is the phone’s unlock password. Instead of brute forcing the password by trying all the possible combinations or looking over the person’s shoulder, SonarSnoop exploits secondary information that will also reveal the password—in this case, the acoustic signature from entering the password on the device.

"SonarSnoop is applicable in any environment where microphones and speakers can interact."

Acoustic side-channel attacks have been widely demonstrated on PCs and a variety of other internet-connected devices. For example, researchers have recovered data from an air-gapped computer by listening to its hard drive fan. They've also been able to determine the contents printed on a piece of paper by an internet-connected printer and reconstructed a printed 3D object based on the sounds of a 3D printer. In most cases, these are passive side-channel attacks, meaning an attacker is just listening for sounds naturally produced by the devices. This is the first time, however, that researchers have successfully demonstrated an active acoustic side-channel attack on a mobile device, which forces the device itself to emit certain sounds.

The attack begins when a user unwittingly installs a malicious application on their phone. When a user downloads the infected app, their phone begins broadcasting a sound signal that is just above the human range of hearing. This sound signal is reflected by every object around the phone, creating an echo. This echo is then recorded by the phone’s microphone.

By calculating the time between the emission of the sound and the return of its echo to the source, it is possible to determine the location of an object in a given space and whether that object is moving—this is known as sonar. The researchers were able to leverage this phenomenon to track the movement of someone’s finger across a smartphone screen by analyzing the echoes recorded through the device’s microphone.
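The underlying distance calculation is ordinary sonar arithmetic, which a short sketch makes concrete (the numbers are illustrative, not taken from the paper):

```python
# Minimal sketch of the sonar principle SonarSnoop relies on: estimating
# distance from the time-of-flight of an echo. Values are illustrative.

SPEED_OF_SOUND = 343.0  # meters per second in air at roughly 20 C

def echo_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting object. The sound travels out and back,
    so the one-way distance is half the round-trip path."""
    return SPEED_OF_SOUND * round_trip_seconds / 2

# An echo returning after 2 milliseconds puts the reflector about 34 cm
# away, roughly the scale of a hand moving over a phone screen.
print(f"{echo_distance(0.002):.3f} m")  # 0.343 m
```

Tracking a moving finger amounts to repeating this measurement continuously and watching how the echo delays shift over time.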

There are nearly 400,000 possible unlock patterns on the 3x3 swipe grid on Android phones, but prior research has demonstrated that 20 percent of people use one of 12 common patterns. While testing SonarSnoop, the researchers only focused on these dozen unlock combinations.
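The "nearly 400,000" figure can be checked with a short brute-force enumeration. This is a generic sketch based on the standard Android pattern rules (4 to 9 distinct dots; a stroke may not jump over an unvisited dot), not code from the SonarSnoop paper:

```python
# Count valid 3x3 Android unlock patterns by depth-first search.
# Dots are numbered 0-8, row by row. A move between two dots whose
# straight line crosses a third dot is only legal if that middle dot
# has already been visited (otherwise the pattern snaps to it).

MIDDLE = {}
for a, b, m in [(0, 2, 1), (3, 5, 4), (6, 8, 7),   # horizontal lines
                (0, 6, 3), (1, 7, 4), (2, 8, 5),   # vertical lines
                (0, 8, 4), (2, 6, 4)]:             # diagonals
    MIDDLE[(a, b)] = MIDDLE[(b, a)] = m

def count_patterns(path):
    # Every path of 4 or more dots is itself a valid pattern.
    total = 1 if len(path) >= 4 else 0
    last = path[-1]
    for nxt in range(9):
        if nxt in path:
            continue  # each dot may be used only once
        mid = MIDDLE.get((last, nxt))
        if mid is not None and mid not in path:
            continue  # would jump over an unvisited dot
        total += count_patterns(path + [nxt])
    return total

total = sum(count_patterns([start]) for start in range(9))
print(total)  # 389112, i.e. "nearly 400,000"
```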

The 12 most common types of unlock swipe patterns. Image: Peng et al./arXiv

To test their sonar attack, the researchers used a Samsung Galaxy S4, an Android phone first released in 2013. Although this attack should work on any phone model, the signal analysis would have to be tailored to a particular model because of differences in the placement of speakers and microphones. “We expect iPhones are similarly vulnerable, but we only tested our attack on Androids,” Peng Cheng, a doctoral student at Lancaster University, told me in an email.

Ten volunteers were recruited for the study and were asked to draw each of the 12 patterns five different times on a custom app. The researchers then tried a variety of sonar analysis techniques to reconstruct the password based on the acoustic signatures emitted by the phone. The best analysis technique resulted in the algorithm only having to try 3.6 out of the 12 possible patterns on average before it correctly determined the pattern.
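The 70 percent reduction reported earlier follows directly from that average; the arithmetic is a one-liner:

```python
# The reduction in guessing work implied by the averages above.
avg_tries = 3.6    # patterns tried on average before a correct guess
candidates = 12    # common patterns the researchers considered
reduction = 1 - avg_tries / candidates
print(f"search space reduced by {reduction:.0%}")  # 70%
```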

Read More: PC Hardware Is Physically Leaking Your Encryption Keys

Although the SonarSnoop attack isn't perfect, it reduces the number of patterns an attacker would have to try by up to 70 percent. In the future, the researchers wrote, it may be possible to improve on this by reducing the amount of time between sonar pulses and by exploring different signal analysis strategies.

To prevent these types of attacks from proliferating in the wild, the researchers suggested that mobile devices could be designed to resist them. The most obvious ways of doing this are limiting the acoustic range of a device's speakers to human-audible signals, or allowing users to selectively turn off their sound system when they are entering sensitive information on their device. Another option is to keep improving protections against the downloading of malicious applications in the first place.

As biometric features such as fingerprint unlocks become increasingly common on mobile devices, the usefulness of this attack for unlocking phones will diminish significantly. Yet as the researchers noted, similar techniques could be used to glean other sensitive information entered using a phone’s touch screen, such as web passwords or even swipe patterns on dating apps like Tinder.

“Although our experiment tried to steal only Android unlock patterns, SonarSnoop is applicable in any environment where microphones and speakers can interact,” Jeff Yan, a security researcher at Lancaster University, told me in an email. “Our next big question is more about helping with everyday people. We’d like them to have a peaceful mind with our attacks and we aim to achieve that by helping computer engineers properly address the security threats in next-generation devices.”

 

We Need to Save the Internet from the Internet of Things

Long term, we need to build an internet that is resilient against IoT-based attacks. But that's a long time coming.

Oct 6 2016, 7:30pm

Image: Shutterstock

Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack.

In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other internet-connected systems to crash by overloading them with traffic. The "distributed" part means that other insecure computers on the internet—sometimes in the millions—are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire.

Basically, it's a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender's capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win.

What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the internet as part of the Internet of Things.

Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can't get fixed on its own.

The IoT will remain insecure unless government steps in and fixes the problem.

Our computers and smartphones are as secure as they are because there are teams of security engineers working on the problem. Companies like Microsoft, Apple, and Google spend a lot of time testing their code before it's released, and quickly patch vulnerabilities when they're discovered. Those companies can support such teams because those companies make a huge amount of money, either directly or indirectly, from their software—and, in part, compete on its security. This isn't true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don't have the expertise to make them secure.

Even worse, most of these devices don't have any way to be patched. Even though the source code to the botnet that attacked Krebs has been made public, we can't update the affected devices. Microsoft delivers security patches to your computer once a month. Apple does it just as regularly, but not on a fixed schedule. But the only way for you to update the firmware in your home router is to throw it away and buy a new one.

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn't true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

The market can't fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don't care. Their devices were cheap to buy, they still work, and they don't even know Brian. The sellers of those devices don't care: they're now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it's an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don't care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

Of course, this would only be a domestic solution to an international problem. The internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an internet that is resilient against attacks like this. But that's a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.

 
Image: exdez/Getty

This Leaked Catalog Offers ‘Weaponized Information’ That Can Flood the Web

For €2,500 a day, governments could buy large scale astroturf campaigns, and for €1 million, services to create false criminal charges.

Sep 2 2016, 2:50pm


In the summer of 2014, a little-known boutique contractor from New Delhi, India, was trying to crack into the lucrative $5 billion a year market of outsourced government surveillance and hacking services.

To impress potential customers, the company, called Aglaya, outlined an impressive—and shady—series of offerings in a detailed 20-page brochure. The brochure, obtained by Motherboard, offers detailed insight into purveyors of surveillance and hacking tools who advertise their wares at industry and government-only conferences across the world.

The leaked brochure, which had never been published before, not only exposes Aglaya's questionable services, but offers a unique glimpse into the shadowy backroom dealings between hacking contractors, infosecurity middlemen, and governments around the world which are rushing to boost their surveillance and hacking capabilities as their targets go online.

Read more: The Hacking Team Defectors

The sales document also outlines how commonplace commercial spy tools have become. For €3,000 per license, the company offered Android and iOS spyware, much like the malware offered in the past by the likes of Hacking Team, FinFisher, and, more recently, the NSO Group, whose iPhone-hacking tool was just caught in the wild last week. For €250,000, the company claimed it could track any cell phone in the world.

These were standard services offered by a plethora of companies who often peddle their wares at ISS World, an annual series of conferences that are informally known as the "Wiretappers' Ball."

But Aglaya had much more to offer, according to its brochure. For eight-to-12-week campaigns costing €2,500 per day, the company promised to "pollute" internet search results and social networks like Facebook and Twitter "to manipulate current events." For this service, which it labelled "Weaponized Information," Aglaya offered "infiltration," "ruse," and "sting" operations to "discredit a target" such as an "individual or company."

"[We] will continue to barrage information till it gains 'traction' & top 10 search results yield a desired results on ANY Search engine," the company boasted as an extra "benefit" of this service.

Aglaya also offered censorship-as-a-service, or Distributed Denial of Service (DDoS) attacks, for only €600 a day, using botnets to "send dummy traffic" to targets, taking them offline, according to the brochure. As part of this service, customers could buy an add-on to "create false criminal charges against Targets in their respective countries" for a more costly €1 million.

Also starting at €1 million, customers could purchase a "Cyber Warfare Service" to attack "manufacturing" plants, the "power grid," "critical network infrastructure," and even satellites and airplanes. Aglaya even claimed to sell unknown flaws, or zero-days, in Siemens industrial control systems for €2 million.

Some of Aglaya's offerings, according to experts who reviewed the document for Motherboard, are likely to be exaggerated or completely made-up. But the document shows that there are governments interested in these services, which means there will be companies willing to fill the gaps in the market and offer them.

"Some of this stuff is really, really, sketchy," Christopher Soghoian, the principal technologist at the American Civil Liberties Union, who has followed the booming market of surveillance tech vendors for years, told Motherboard. "When you're offering the ability to attack satellites and airplanes, this is not lawful intercept. This is basically 'whatever you want we'll try to do it.' These guys are clearly mercenaries, what's not clear is if they can deliver on their promises. This is not a company pretending that it's solely focusing on the lawful intercept market, this is outsourcing cyber operations."

Ankur Srivastava, the CEO and founder of Aglaya, did not deny that the brochure is legitimate, saying only that this particular product sheet was passed on to "one particular customer."

"These products are not on our web site, with our customers and nor do they represent the vision of our product portfolio," Srivastava said in an email. "This was a custom proposal for one customer only and was not pursued since the relationship did not come to fruition."

Srivastava added that he regretted attending ISS because Aglaya was never able to close a deal and sell its services. He also claimed that the company doesn't offer those kinds of services anymore. (One of the organizers of ISS World did not respond to a request for comment asking whether the conference vetted or condoned companies offering such services.)

"I would go the distance to aim to convince you that we are not a part of this market and unintentionally underwent a marketing event at the wrong trade-show," he added.

When asked a series of more detailed questions, however, Srivastava refused to elaborate, instead reiterating that Aglaya never did any business as a government hacking contractor and that attending ISS was "an exercise of time and money, albeit, in futility." He complained that his company's failure was likely due to the fact that it is not based "in the West," hypothesizing that most customers want "western" suppliers.

Asked for the identity of the potential customer who showed interest for these services, Srivastava said he did not know, claiming he only dealt with a reseller, an "agent" from South America who "claimed to have global connections" and "was interested in anything and everything."

The document itself doesn't offer any clues as to the country interested. But Latin American governments such as the ones in Mexico and Ecuador are known to have used Twitter bots and other tactics to launch disinformation campaigns online, much like the ones Aglaya was offering. Mexico, moreover, is a well-known big-spender when it comes to buying off-the-shelf spyware made by the likes of Hacking Team and FinFisher.

"I would go the distance to aim to convince you that we are not a part of this market."

Srivastava also dodged questions about his company's spyware products. But a source who used to work in the surveillance tech industry, who asked to remain anonymous to discuss sensitive issues, claimed to have seen a sample of Aglaya's malware in the wild.

"It was crap," the source said. "The code was full of references to Aglaya."

One of the source's customers was targeted with it at the end of last year, when he received a new phone in the mail under the pretense that he had won a contest that turned out to be made up, according to the source. As ridiculous as this might sound, this is actually how Aglaya targeted victims, given that, by its own admission, the company couldn't get around Apple's security measures and jailbreak the device to infect it with malware.

This sloppy workaround was described in an article in the spyware trade publication Insider Surveillance.

"For installation, Aglaya iOS Backdoor requires an unattended phone and a passcode," the article read. "By 'unattended' we're hoping they mean 'idle,' not 'impounded.' Or that they're not expecting agents to sneak into the target's bedroom to plant the malware...or wait for him to divulge the password while talking in his sleep."

The anonymous source, in any case, said that there is certainly a market for the services offered by Aglaya, including the sketchier ones.

"I think it's credible that there is interest for these type of services at least in certain countries in the Middle East," the source said.

Another source, who also requested anonymity to speak freely, said that an Aglaya representative once claimed that his company had customers in the Middle East. The source also said that Aglaya's claims of having abandoned the surveillance tech business are "a lie," adding that he saw an updated version of the brochure last year.

Aglaya might have some customers, but it's likely a small fish in the surveillance and hacking business. There are certainly many more companies, likely with better services and more customers, that we don't know about. We also might never know about them, unless they get caught because customers abuse their tools—as in the cases of NSO Group and Hacking Team—or their marketing materials leak online.

Often, these companies peddle both defensive and offensive services. Srivastava, after dodging most of my questions, offered to let Motherboard take a look at Aglaya's latest product, dubbed SpiderMonkey, a device that detects "Stingrays," or IMSI-catchers, the surveillance gizmos used by police and intelligence agencies around the world to track and intercept cellphone data.

"Please do keep us in mind," he said, likely repeating a line that he told his unknown "one" customer two years ago.


 
Image: Shutterstock

Patching Is Failing as a Security Paradigm

Many of the most damaging hacks in recent history were only possible because someone failed to update software.

Nov 16 2018, 12:30pm


The Weakest Link is Motherboard's third annual theme week dedicated to the future of hacking and cybersecurity. Follow along.



The following is an excerpted chapter from Bruce Schneier's book, Click Here to Kill Everybody: Security and Survival in a Hyper-connected World.

There are two basic paradigms of security. The first comes from the real world of dangerous technologies: the world of automobiles, planes, pharmaceuticals, architecture and construction, and medical devices. It’s the traditional way we do design, and can best be summed up as “Get it right the first time.” This is the world of rigorous testing, of security certifications, and of licensed engineers. At the extreme, it’s a slow and expensive process: think of all the safety testing Boeing conducts on its new aircraft, or any pharmaceutical company conducts before releasing a new drug to market. It’s also the world of slow and expensive changes, because each change has to go through the same process.

We do this because the costs of getting it wrong are so great. We don’t want buildings collapsing on us, planes falling out of the sky, or thousands of people dying from a pharmaceutical’s side effects or drug interactions. And while we can’t eliminate all those risks completely, we can mitigate them by doing a lot of up-front work.

The alternative security paradigm comes from the fast-moving, freewheeling, highly complex, and heretofore largely benign world of software. Its motto is “Make sure your security is agile” or, in Facebook lingo, “Move fast and break things.” In this model, we try to make sure we can update our systems quickly when security vulnerabilities are discovered. We try to build systems that are survivable, that can recover from attack, that actually mitigate attacks, and that adapt to changing threats. But mostly we build systems that we can quickly and efficiently patch. We can argue how well we achieve these goals, but we accept the problems because the cost of getting it wrong isn’t that great.

There are undiscovered vulnerabilities in every piece of software.

In a world where we increasingly rely on internet-connected devices, these two paradigms are colliding. They’re colliding in your cars. They’re colliding in home appliances. They’re colliding in computerized medical devices. They’re colliding in home thermostats, computerized voting machines, and traffic control systems—and in our chemical plants, dams, and power plants. They’re colliding again and again, and the stakes are getting higher because failures can affect life and property.

Patching is something we all do all the time with our software—we usually call it "updating"—and it's the primary mechanism we have to keep our systems secure. How it works (and doesn't), and how it will fare in the future, is important to understand in order to fully appreciate the security challenges we face.

There are undiscovered vulnerabilities in every piece of software. They lie dormant for months and years, and new ones are discovered all the time by everyone from companies to governments to independent researchers to cybercriminals. We maintain security through (1) discoverers disclosing a found vulnerability to the software vendor and the public, (2) vendors quickly issuing a security patch to fix the vulnerability, and (3) users installing that patch.

It took us a long time to get here. In the early 1990s, researchers would disclose vulnerabilities to the vendors only. Vendors would respond by basically not doing anything, maybe getting around to fixing the vulnerabilities years later. Researchers then started publicly announcing that they had found a vulnerability, in an effort to get vendors to do something about it—only to have the vendors belittle them, declare their attacks "theoretical" and not worth worrying about, threaten them with legal action, and continue to not fix anything. The only solution that spurred vendors into action was for researchers to publish details about the vulnerability. Today, researchers give software vendors advance warning when they find a vulnerability, but then they publish the details. Publication has become the stick that motivates vendors to quickly release security patches, and the means for researchers to learn from each other and get credit for their work—which further improves security by giving other researchers both knowledge and incentive. If you hear the term "responsible disclosure," it refers to this process.

Lots of researchers—from lone hackers to academic researchers to corporate engineers—find and responsibly disclose vulnerabilities. Companies offer bug bounties to hackers who bring vulnerabilities to them instead of publishing those vulnerabilities or using them to commit crimes. Google has an entire team, called Project Zero, devoted to finding vulnerabilities in commonly used software, both public-domain and proprietary. You can argue with the motivations of these researchers—many are in it for the publicity or competitive advantage—but not with the results. Despite the seemingly endless stream of vulnerabilities, any piece of software becomes more secure as they are found and patched.

It's not happily ever after, though. There are several problems with the find-and-patch system. Let's look at the situation in terms of the entire ecosystem—researching vulnerabilities, disclosing vulnerabilities to the manufacturer, writing and publishing patches, and installing patches—in reverse chronological order.

Installing patches: I remember those early years when users, especially corporate networks, were hesitant to install patches. Patches were often poorly tested, and far too often they broke more than they fixed. This was true for everyone who released software: operating system vendors, large software vendors, and so on. Things have changed over the years. The big operating system organizations—Microsoft, Apple, and Linux in particular—have become much better about testing their patches before releasing them. As people have become more comfortable with patches, they have become better about installing them more quickly and more often. At the same time, vendors are now making patches easier to install.

Still, not everyone patches their systems. The industry rule of thumb is that a quarter of us install patches on the day they’re issued, a quarter within the month, a quarter within the year, and a quarter never do. The patch rate is even lower for military, industrial, and healthcare systems because of how specialized the software is. It’s more likely that a patch will break some critical functionality.
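The rule of thumb above makes for an easy back-of-the-envelope calculation of exposure over time. A minimal sketch in Python (the quarter/quarter/quarter/never split is the industry rule of thumb quoted here, not measured data):

```python
# Rule-of-thumb patch adoption: a quarter of users patch the day a
# fix ships, a quarter within the month, a quarter within the year,
# and a quarter never do.
ADOPTION_WAVES = [
    ("day of release", 0.25),
    ("within a month", 0.25),
    ("within a year", 0.25),
]

def vulnerable_fraction(waves_elapsed: int) -> float:
    """Fraction of systems still unpatched after the given number of
    adoption waves (0 = patch just shipped, 3 = a year later)."""
    patched = sum(share for _, share in ADOPTION_WAVES[:waves_elapsed])
    return round(1.0 - patched, 2)

for i, label in enumerate(["patch ships", "day one", "one month", "one year"]):
    print(f"{label}: {vulnerable_fraction(i):.0%} still vulnerable")
```

The takeaway: even a full year after a fix is available, a quarter of systems remain exposed, which is why old exploits keep working.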

People who are using pirated copies of software often can’t get updates. Some people just don’t want to be bothered. Others forget. Some people don’t patch because they’re tired of vendors slipping unwanted features and software into updates. Some IoT systems are just harder to update. How often do you update the software in your router, refrigerator, or microwave? Never is my guess. And no, they don’t update automatically.

Three 2017 examples illustrate the problem. Equifax was hacked because it didn’t install a patch for its Apache web server that had been available two months previously. The WannaCry malware was a worldwide scourge, but it only affected unpatched Windows systems. The Amnesia IoT botnet made use of a vulnerability in digital video recorders that had been disclosed and fixed a year earlier, but existing machines couldn’t be patched.

The situation is worse for the computers embedded in IoT devices. In a lot of systems—both low-cost and expensive—users have to manually download and install relevant patches. Often the patching process is tedious and complicated, and beyond the skill of the average user. Sometimes, ISPs have the ability to remotely patch things like routers and modems, but this is also rare. Even worse, many embedded devices don’t have any way to be patched. Right now, the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

At the low end of the market, the result is hundreds of millions of devices that have been sitting on the Internet, unpatched and insecure, for the last five to ten years. In 2010, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. Things haven’t improved since then.

Hackers are starting to notice. The malware DNSChanger attacks home routers, as well as computers. In Brazil in 2012, 4.5 million DSL routers were compromised for purposes of financial fraud. In 2013, a Linux worm targeted routers, cameras, and other embedded devices. In 2016, the Mirai botnet used vulnerabilities in digital video recorders, webcams, and routers; it exploited such rookie security mistakes as devices having default passwords.

The difficulty of patching also plagues expensive IoT devices that you might expect to be better designed. In 2015, Chrysler recalled 1.4 million vehicles to patch a security vulnerability. The only way to patch them was for Chrysler to mail every car owner a USB drive to plug into a port on the vehicle’s dashboard. In 2017, Abbott Labs told 465,000 pacemaker patients that they had to go to an authorized clinic for a critical security update. At least the patients didn’t have to have their chests opened up.

This is likely to be a temporary problem, at least for more expensive devices. Industries that aren’t used to patching will learn how to do it. Companies selling expensive equipment with embedded computers will learn how to design their systems to be patched automatically. Compare Tesla to Chrysler: Tesla pushes updates and patches to cars automatically, and updates the systems overnight. Kindle does the same thing: owners have no control over the patching process, and usually have no idea that their devices have even been patched.

Writing and publishing patches: Vendors can be slow to release security patches. One 2016 survey found that about 20 percent of all vulnerabilities—and 7 percent of vulnerabilities in the “top 50 applications”—did not have a patch available the same day the vulnerability was disclosed. (To be fair, this is an improvement over previous years. In 2011, a third of all vulnerabilities did not have a patch available on the day of disclosure.) Even worse, only an additional 1 percent were patched within a month after disclosure, indicating that if a vendor doesn’t patch immediately, it’s not likely to get to it anytime soon. Android users, for example, often have to wait months after Google issues a patch before their handset manufacturers make that patch available to users. The result is that about half of all Android phones haven’t been patched in over a year.
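A quick bit of arithmetic on those survey figures shows why the one-month window matters (the percentages are the ones quoted above; the computation is purely illustrative):

```python
# 2016 survey figures quoted in the text: ~20% of vulnerabilities had
# no patch available on the day of disclosure, and only a further 1%
# were patched within the following month.
no_patch_on_disclosure = 0.20
patched_in_next_month = 0.01

patched_day_zero = 1.0 - no_patch_on_disclosure                 # 80% fixed immediately
patched_month_one = patched_day_zero + patched_in_next_month    # 81% a month later
still_unpatched = 1.0 - patched_month_one

print(f"{still_unpatched:.0%} of disclosed vulnerabilities still "
      "had no patch a month after disclosure")
```

In other words, nearly everything that gets patched gets patched on day one; a vendor that misses that window has, statistically, almost given up.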

Patches also aren't as reliable as we would like them to be; they still occasionally break the systems they're supposed to be fixing. In 2014, an iOS patch left some users unable to get a cell signal. In 2017, a flawed patch to Internet-enabled door locks by Lockstate bricked the devices, leaving users unable to lock or unlock their doors. In 2018, in response to the Spectre and Meltdown vulnerabilities in computer CPUs, Microsoft issued a patch to its operating system that bricked some computers. There are more examples.

If we turn to embedded systems and IoT devices, the situation is much more dire. Our computers and smartphones are as secure as they are because there are teams of security engineers dedicated to writing patches. The companies that make these devices can support such big teams because they make a huge amount of money, either directly or indirectly, from their software—and, in part, compete on its security. This isn't true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin and in much smaller quantities, and are often designed by offshore third parties. Engineering teams assemble quickly to design the products, then disband or go build something else. Parts of the code might be old and out-of-date, reused again and again. There might not be any source code available, making it much harder to write patches. The companies involved simply don't have the budget to make their products secure, and there's no business case for them to do so.

Even worse, no one has the incentive to patch the software once it’s been shipped. The chip manufacturer is busy shipping the next version of the chip, the device manufacturer is busy upgrading its product to work with this next chip, and the vendor with its name on the box is just a reseller. Maintaining the older chips and products isn’t a priority for anyone.

Even when manufacturers have the incentive, there's a different problem. If there's a security vulnerability in Microsoft operating systems, the company has to write a patch for each version it supports. Maintaining lots of different operating systems gets expensive, which is why Microsoft and Apple—and everyone else—support only the few most recent versions. If you're using an older version of Windows or macOS, you won't get security patches, because the companies aren't creating them anymore.

This won't work with more durable goods. We might buy a new DVR every 5 or 10 years, and a refrigerator every 25 years. We drive a car we buy today for a decade, sell it to someone else who drives it for another decade, and that person sells it to someone who ships it to a Third World country, where it's resold yet again and driven for yet another decade or two. Go try to boot up a 1978 Commodore PET computer, or try to run 1979's VisiCalc, and see what happens; we simply don't know how to maintain 40-year-old software.

Consider a car company. It might sell a dozen different types of cars with a dozen different software builds each year. Even assuming that the software gets updated only every two years and the company supports the cars for only two decades, the company needs to maintain the capability to update 20 to 30 different software versions. (For a company like Bosch that supplies automotive parts for many different manufacturers, the number would be more like 200.) The expense and warehouse size for the test vehicles and associated equipment would be enormous.

Alternatively, imagine if car companies announced that they would no longer support vehicles older than five, or ten, years. There would be serious environmental consequences.

We're already seeing the effects of systems so old that the vendors stopped patching them, or went out of business altogether. Some of the organizations affected by WannaCry were still using Windows XP, an unpatchable 17-year-old operating system that Microsoft stopped supporting in 2014. About 140 million computers worldwide still run that operating system, including most ATMs. A popular shipboard satellite communications system once sold by Inmarsat Group is no longer patched, even though it contains critical security vulnerabilities. This is a big problem for industrial-control systems, because many of them run outdated software and operating systems, and upgrading them is prohibitively expensive because they're very specialized. These systems can stay in operation for many years and often don't have big IT budgets associated with them.

Certification exacerbates the problem. Before everything became a computer, dangerous devices like cars, airplanes, and medical devices had to go through various levels of safety certification before they could be sold. A product, once certified, couldn’t be changed without having to be recertified. For an airplane, it can cost upwards of a million dollars and take a year to change one line of code. This made sense in the analog world, where products didn’t change much. But the whole point of patching is to enable products to change, and change quickly.

Disclosing vulnerabilities: Not everyone discloses security vulnerabilities when they find them; some hoard them for offensive purposes. Attackers use them to break into systems, and that's the first time we learn of them. These are called "zero-day vulnerabilities," and responsible vendors try to quickly patch them as well. Government agencies like the NSA, US Cyber Command, and their foreign equivalents also keep some vulnerabilities secret for their own present and future use. Every discovered but undisclosed vulnerability—even if it is kept by someone you trust—can be independently discovered and used against you.

Even researchers who want to disclose the vulnerabilities they discover sometimes find a chilly reception from the device manufacturers. Those new industries getting into the computer business—the coffeepot manufacturers and their ilk—don’t have experience with security researchers, responsible disclosure, and patching, and it shows. This lack of security expertise is critical. Software companies write software as their core competency. Refrigerator manufacturers, or refrigerator divisions of larger companies, have a different core competency—presumably, keeping food cold—and writing software is always going to be a sideline.

Just like the computer vendors of the 1990s, IoT manufacturers tout the unbreakability of their systems, deny any problems that are exposed, and threaten legal action against those who expose any problems. The 2017 Abbott Labs patch came a year after the company called the initial report of the security vulnerability—published without details of the attack—"false and misleading." That might be okay for computer games or word processors, but it is dangerous for cars, medical devices, and airplanes—devices that can kill people if bugs are exploited. But should the researchers have published the details anyway? No one knows what responsible disclosure looks like in this new environment.

Finally, researching vulnerabilities: In order for this ecosystem to work, we need security researchers to find vulnerabilities and improve security, and a law called the Digital Millennium Copyright Act (DMCA) is blocking those efforts. It’s an anti-copying law that includes a prohibition against security research. Technically, the prohibition is against circumventing product features intended to deter unauthorized reproduction of copyrighted works. But the effects are broader than that. Because of the DMCA, it’s against the law to reverse engineer, locate, and publish vulnerabilities in software systems that protect copyright. Since software can be copyrighted, manufacturers have repeatedly used this law to harass and muzzle security researchers who might embarrass them.

One of the first examples of such harassment took place in 2001. The FBI arrested Dmitry Sklyarov at the DefCon hackers conference for giving a presentation describing how to bypass the encryption code in Adobe Acrobat that was designed to prevent people from copying electronic books. Also in 2001, HP used the law to threaten researchers who published security flaws in its Tru64 product. In 2011, Activision used it to shut down the public website of an engineer who had researched the security system in one of its video games. There are many examples like this.

In 2016, the Library of Congress—seriously, that’s who’s in charge of this—added an exemption to the DMCA for security researchers, but it’s a narrow exemption that’s temporary and still leaves a lot of room for harassment.

Other laws are also used to squelch research. In 2008, the Boston MBTA used the Computer Fraud and Abuse Act to block a conference presentation on flaws in its subway fare cards. In 2013, Volkswagen sued security researchers who had found vulnerabilities in its automobile software, preventing them from being disclosed for two years. And in 2016, the Internet security company FireEye obtained a court injunction against publication of the details of FireEye product vulnerabilities that had been discovered by third parties.

The chilling effects are substantial. Lots of security researchers don’t work on finding vulnerabilities, because they might get sued and their results might remain unpublished. If you’re a young academic concerned about tenure, publication, and avoiding lawsuits, it’s just safer not to risk it.

For all of these reasons, the current system of patching is going to be increasingly inadequate as computers become embedded in more and more things. The problem is that we have nothing better to replace it with.

This gets us back to the two paradigms: getting it right the first time, and fixing things quickly when problems arise.

These have parallels in the software development industry. “Waterfall” is the term used for the traditional model for software development: first come the requirements; then the specifications; then the design; then the implementation, testing, and fielding. “Agile” describes the newer model for software development: build a prototype to meet basic customer needs; see how it fails; fix it quickly; update requirements and specifications; repeat again and again. The agile model seems to be a far better way of doing software design and development, and it can incorporate security design requirements, as well as functional design requirements.

You can see the difference in Microsoft Office versus the apps on your smartphone. A new version of Microsoft Office happens once every few years, and it is a major software development effort resulting in many design changes and new features. A new version of an iPhone app might be released every other week, each with minor incremental changes and occasionally a single new feature. Microsoft might use agile development processes internally, but its releases are definitely old-school.

We need to integrate the two paradigms. We don’t have the requisite skill in security engineering to get it right the first time, so we have no choice but to patch quickly. But we also have to figure out how to mitigate the costs of the failures inherent in this paradigm. Because of the inherent complexity of the internet and internet-connected devices, we need both the long-term stability of the waterfall paradigm and the reactive capability of the agile paradigm.

Excerpted from Click Here to Kill Everybody by Bruce Schneier. Copyright © 2018 by Bruce Schneier. With permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.

 
Images: imago | Christian Spicker | Future Image

The Fight over Upload Filters: How Axel Voss Sees the Internet

He is pushing a reform through the EU Parliament even though thousands are protesting against it. In a VICE interview, Axel Voss (CDU) explains why he still thinks it's a smart idea.

|
Mar 19 2019, 1:51pm

While thousands of people in German cities have been protesting for a free internet for weeks, CDU politician Axel Voss is doggedly pushing what may be the most controversial EU reform of recent years through the European Parliament. As the rapporteur in charge, Voss is considered the father of the reform—and the chief villain for many of its critics.

First of all, updating an old law like copyright for the digital age is of course a good idea. But critics, experts, and journalists consider the current draft a catastrophe for internet culture. By now, numerous German politicians are warning against it as well.

The main problem: the planned reform makes online platforms themselves liable when users' posts infringe copyright—and requires them to prevent such infringement. Experts agree that this task can only be solved with upload filters. The consequence would be that far more posts than necessary get blocked, a huge problem for parodies, remixes, and memes.

If the EU agrees on the reform in the final parliamentary vote, expected at the end of March, the filter requirement will have to be transposed into German law. That is particularly delicate because the government would thereby break the coalition agreement between the SPD, CDU, and CSU, which states: "We reject obliging platforms to use upload filters as disproportionate." The CDU has since announced that upload filters are to be avoided when the reform is implemented in German law. Experts, however, consider this proposal "absurd."

The criticism of upload filters doesn't seem to faze Voss. In a VICE interview, he explains why he doesn't feel responsible for the technical implementation of the EU reform—and why he believes the protests against Article 13 were orchestrated by tech companies.

VICE: Mr. Voss, which internet services do you use?
Axel Voss: You mean websites? I do use apps as well. WhatsApp, web browsers, messaging services, sometimes Facebook, Twitter. As long as I'm not being insulted there at the moment.

Some users seem to practically hate you. Why do you think the debate over the EU reform is so heated?
It seems to me that many people regard the possibilities of the digital world as their purpose in life. Perhaps it's also those who believe the copyright reform would destroy cultural goods like memes and GIFs. But that's not true.

Critics of the reform fear that future upload filters won't be able to tell remixes, memes, and parodies apart from copyright infringements, and will block them. Is that true?
No. People think everything will then be blocked. But all of that will keep working. We've handled it through exceptions. All we want to achieve is that platforms conclude more licenses with creators and are otherwise liable for infringements. Article 13 is an attempt to actually enforce copyright. But what a platform leaves up and what it doesn't always remains the platform's decision.

"I'm not a technician"

Could you briefly summarize the exceptions?
At the moment, only platforms with a large number of users are affected. And it only concerns sites whose business model is built on copyrighted works. Dating platforms, trading platforms, and neighborhood platforms are not affected. If I want to play my neighbor the latest Shakira song on a platform, that still falls under the exceptions.

Protests against Article 13
"Freedom instead of Voss" reads this sign at a demonstration in Berlin on March 2, 2019 | Image: imago | IPON

YouTube already has a technical solution for copyright. Its Content ID system recognizes copyrighted material, but it has recurring problems with parodies, quotations, and remixes. Can you explain again why the EU reform won't make these problems worse?
But we're talking about a legal solution here. In the future, rights holders can simply deposit their licensing information with the platforms, as film studios already do today, for example. That makes fair remuneration possible. If I say, "I want to build my business model on theft," then I also have to pay fairly. YouTube, with its quasi-monopoly position, hasn't done that so far.

As for the technical solutions, we say: yes, it's possible that something will be blocked that shouldn't be. You have to assume it won't work 100 percent. I'm not a technician, and I can't explain to you whether remixes can really be distinguished that well. But on Google there's still that page where you can click on memes, a proper category.

There's a memes category on Google?
Yes, you can really click on it. Memes. So there must be something to it—that such memes can be recognized!

Critics fear that the reform will wrongly block a great deal of content.
But that's what we've provided a complaint mechanism for.

"I'm not saying everything will work perfectly"

Something similar already exists on YouTube. Users there can complain when a video has been wrongly blocked. They often have to wait days for a response.
How that works technically will be a question for the developers. Look, here's what the critics don't see: if we do nothing, the European Court of Justice will hand down a ruling, and clearly in favor of the rights holders. Then there will be no gradations and no exceptions.
I'm not saying that everything will work perfectly from the start, either. But this emotionally charged campaign by some YouTubers simply doesn't match reality.

Many YouTubers comment on current events in their videos and use plenty of video clips in the process. Can such videos even exist under Article 13?
How does that work exactly? Is that a blogger who embeds clips from the Tagesthemen news program on YouTube? You'd have to look at that very closely.

Which YouTube channels are you subscribed to?
None.

But still: if you have followers there—hundreds of thousands, millions—then you have to ask yourself, shouldn't I actually be taking care of rights? YouTube has to tell them that they need to obtain those rights. If I see a YouTuber there who has maybe 30 employees, then he has to obtain the rights if he absolutely must show something.

Anti-Article 13 protests
Many protesters question whether Axel Voss has understood the internet, as here at a demonstration in Berlin on March 2, 2019 | Image: imago | Christian Spicker

What do you make of the demonstrations where mostly young people are protesting against the planned reform?
There were already two calls for protests last summer that came to nothing. Only a handful of people ever showed up.

"Come see me at the European Parliament"

By now it's thousands. YouTubers, Twitch streamers, and alliances are calling for the demonstrations.
Yes, they always think they're so free in forming their opinions, but they're not at all.

They're not?
I'm talking about the summer campaign now. That was orchestrated by the big platforms, as has since been established.

What exactly do you mean by orchestrated?
That they provide tools so the whole thing gets rolling, and then words like "censorship machine" and "upload blocker" come into circulation.

What evidence do you have for that?
That was all analyzed once, by a Mr. Rieck in the FAZ.

[Editor's note: The author of the FAZ piece, Volker Rieck, is not a journalist but the managing director of the content-protection service provider "FDS File Defense Service", which works for numerous rights holders.]

You assume the platforms are behind the protest against the EU copyright reform?
Yes, go read it! You'll find that a large part of the protests was initiated from the US.

Do you also think the online protests against Article 13 were driven by bots, as your party colleague Mr. Schulze suggested in a tweet?
It can't be dismissed out of hand. When you ask back, nothing comes. No answer at all.

Can you briefly explain that?
Well, with the emails we receive. Nothing comes back when you write to those addresses. So we can't say whether there's a human being behind every email.

Protests against Article 13
For protesters, Axel Voss has himself become a meme | Image: imago | IPON

Some people did receive your email reply and forwarded it to the lawyer Christian Solmecke, who covers Article 13 on YouTube. In one video he shows the emails you sent to voters, in which you speak of "lies", "disinformation", and "fake news" about Article 13. Do you consider a Trump term like fake news appropriate?
Well, what would you call such deliberately spread false claims?

All I ever hear are ideological arguments against filters. Yet all of this already exists. Now people suddenly act as if it were all new and terrible. That is knowingly false! Come see me at the European Parliament and then try to argue against it. Nothing comes of it.

"Let's make short clips of up to 30 seconds free of charge"

You also spoke of "lies". What lies are being spread about Article 13?
Wikipedia is a really extreme example of that. As an encyclopedia, they are explicitly excluded from liability. I told the good Wikipedia founder Jimmy Wales that myself! And yet their side acts as if the opposite were the case. Where does he get his information?

Wikipedia fears that it could itself be affected by possible upload filters. Until now, any user has been able to upload images, videos, and audio files, and thereby expand the encyclopedia and keep it up to date. Couldn't upload filters therefore threaten the future of the free online encyclopedia?
I don't remember the details anymore; it's all so fast-paced and short-lived.

Wikimedia wrote you an open letter asking you to prevent the upload filter rule. A good dozen media experts and technology associations signed it.
Yes, but how else are we supposed to do it? Even now, the only idea that ever comes up is notice and takedown …

… the name for the current rule, under which platforms only have to delete unlawful content once they have been informed of it …
But filmmakers say: "No, I don't want my material to be uploaded at all." So what do you do? So after the summer I made the proposal that we also include platform liability in the reform.

There are certainly alternative proposals for a new copyright law, for example fair use, which grants users considerably more rights when they create something of their own from copyrighted material. What do you think of that?
One could think about that. But it doesn't really fit into our system. I've brought quite different ideas into Parliament before, for example: come on, let's make short clips of up to 30 seconds free of charge! But I can't get a majority for that. My staff keep telling me, we gave you so many nice ideas. But I'm only an intermediary and can only ever operate within the realm of what's possible.

Thank you for your time, Mr. Voss.
Gladly. And if you have an idea about copyright—please, send it my way!


The Internet of Things Sucks So Bad Even ‘Amateurish’ Malware Is Enough

The malware that powered the “Botnet of Things” behind one of the largest cyberattacks ever isn’t even that great, and that’s exactly why we should be worried.

Oct 3 2016, 6:40pm

Image: McIek/Shutterstock

Over the last few weeks, unknown hackers have launched some of the largest cyberattacks the internet has ever seen. These attacks were notable not just for their unprecedented size and power, but also because they were powered by a large zombie army of hacked cameras and other devices that fall into the category of the Internet of Things, or IoT.

On Friday, the hacker who claims to have created the malware that was powering this massive "Botnet Of Things" published its source code, which appears to be legitimate.

"It looks like this release is the real deal," according to Marshal Webb, the chief technology officer of BackConnect, an anti-DDoS firm, who has been collecting samples of the malware over the last few weeks.

The code may be the real deal, but it isn't actually that sophisticated, according to security researchers who have been studying it.

Read more: The Internet of Things Will Turn Large-Scale Hacks into Real World Disasters

"Whoever originally wrote it clearly put some thought into it. Like, it's better than most of the shit out there hitting IoT," Darren Martyn, a security researcher who has been analyzing the malware, told Motherboard in an online chat. "[But] it's still fairly amateurish."

The malware, known as Mirai, was dumped on Hackforums by its alleged author and later published by others on GitHub. Mirai is designed to scan the internet for vulnerable internet-connected devices that use the telnet protocol and have weak default logins and passwords such as "admin" and "123456", "root" and "password", and even "mother" and "fucker," which are credentials used by another botnet made of hacked routers.

A sample of username and password combinations that the malware is programmed to try in order to hack into its targets.
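Mirai's core trick is nothing more than a dictionary attack against factory-default logins. The logic can be sketched in a few lines of Python, framed here as a defensive audit of your own devices; `attempt_login` is a hypothetical stand-in callback for a real telnet client, and the credential list is a small sample of the kind of pairs reported in the dump.

```python
# A sketch of Mirai-style default-credential checking, usable as a
# defensive audit of devices you own. `attempt_login` is a placeholder
# for a real login routine (e.g. a telnet client); it takes a username
# and password and returns True if the device accepts them.

DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "123456"),
    ("root", "password"),
    ("root", "root"),
]

def find_weak_credentials(attempt_login, credentials=DEFAULT_CREDENTIALS):
    """Return the first (user, password) pair the device accepts, else None."""
    for user, password in credentials:
        if attempt_login(user, password):
            return (user, password)
    return None
```

If this returns anything but `None` for one of your devices, the fix is the one researchers keep repeating: change the default password and disable telnet.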

Once the malware finds one of these devices, which are usually surveillance cameras, DVRs or routers, it infects them and self-propagates. This gives the malware operators full control over the hacked devices and allows them to launch DDoS attacks, such as the ones that hit the website of noted journalist Brian Krebs and hosting provider OVH, using various sources of traffic like UDP, DNS, HTTP floods, as well as GRE IP and GRE Ethernet floods.

The malware is clearly designed to be used as a DDoS-for-hire service, as indicated by the code strings that say "Sharing access IS prohibited! [...] Do NOT share your credentials!"

The code is full of inside jokes and funny tidbits, such as several mentions of the word "memes," and even a YouTube link that turns out to point to the Rick Astley video "Never Gonna Give You Up"—the once-ubiquitous internet meme known as "Rickrolling." All these are likely a way for the author or authors to poke fun at whoever is looking at the code, including security researchers and law enforcement authorities.


Some researchers noted that the code as it is needs some tweaking before being launched. As the security researcher MalwareTech put it in a chat, the DDoS command "will just print a bunch of hacker sounding bullshit to the console and not actually do anything"—perhaps another inside joke.

Martyn said that whoever wants to use the malware needs to change some configurations and do some setting up, but "anyone with a sense of clue could set it up in around 30 minutes."

Interestingly, some comments in parts of the malware code are in Cyrillic script, hinting that one of the authors or developers is from Eastern Europe.

Mirai is no Stuxnet or other sophisticated malware, but it still works, and now that it is available for all to use, it is actively spreading.

If mediocre malware can power some of the largest DDoS attacks ever, and considering the sad state of security of the Internet of Things in general, we should probably brace for more cyberattacks powered by our easy-to-hack "smart" Internet of Things, as many, including ourselves, had predicted months ago.

"I am just surprised at how such a trivial attack code could be responsible for such a large DDoS. It really says a lot more about the state of IoT security than the specifics of the malware," a security researcher that goes by the name Hacker Fantastic told Motherboard. "If people still aren't changing default passwords and disabling telnet on Internet connected equipment in 2016 then we are heading to a future with more incidents like this happening."

Correction: a previous version of this story stated that the username-password combination "mother" and "fucker" was likely a joke by the malware authors. In reality, those credentials are used by a worm that infects routers and sets those credentials as passwords and usernames with the goal of creating a botnet.


What Is a 'Supply Chain Attack?'

A dangerous threat that takes advantage of the inherent trust between users and their software providers is a growing trend.

Sep 29 2017, 4:00pm

Image: Shutterstock

Most of us trust software makers to update their products with new functionality or security fixes, but have you ever considered that one of those updates could one day compromise your entire digital life? Well, hackers have.

Online banking trojans that steal credentials from users' computers used to be all the rage in the cybercriminal world a decade ago, but then banks implemented two-factor authentication schemes and many attackers now prefer to hack into financial institutions directly. Similarly, attackers used to inject software exploits into popular websites, but after software developers added anti-exploit technologies to their applications, hackers started attacking developers directly.

Attackers always try to choose the path of least resistance, but if that gets blocked, they adapt and find the next best way to reach their goal, even if it takes a bit more effort. It seems that we're now entering the age of software supply chain attacks, a dangerous threat that takes advantage of the inherent trust between computer users and their software providers. And it's not an easy problem to fix.

Supply chain attacks can happen when hackers gain access to a software company's infrastructure—development environment, build servers, update servers, etc.—and are able to inject malware into new software releases or security updates. This results in users downloading malware through the company's official software distribution channels, which they've come to trust.

Supply chain attacks are not a new idea and security experts have long warned about the possibility of software getting compromised before being delivered to customers by vendors or their partners. But while there have been examples of such attacks over the years, ranging from simple replacement of downloads on compromised vendor websites to sophisticated cyberespionage operations, the incidents have remained fairly isolated. Until now.

This year there've been at least five high-profile cases where hackers broke into the IT infrastructure of software providers and added malware to programs trusted by large numbers of users. Security experts agree that it's a growing trend that culminated recently with an attack that resulted in infected versions of CCleaner—a Windows system optimization tool—being delivered to over 2.2 million users.

It's true that many software supply chain compromises so far, including the recent CCleaner incident, have targeted corporations and were likely perpetrated by sophisticated cyberespionage groups with possible ties to nation states. But there were plenty of attacks that have affected consumers as well and which fit nicely into the supply chain category.

How do supply chain attacks happen?

There are many points of a supply chain that attackers can target. For example, the US National Security Agency reportedly engages in physical attacks called supply chain interdiction that involve intercepting legitimate shipments of computers or other devices, inserting backdoors into them, and delivering them to the intended recipients. This is done without the knowledge of the device manufacturers.

Like in the CCleaner case, attackers can also break into the development infrastructure of software vendors and add their malicious code to applications before they're compiled and released to the public. These breaches usually involve compromising an employee's computer through spear-phishing—targeted email-based attacks—or some other method and then moving laterally through the internal network from system to system, exploiting vulnerabilities and collecting credentials until access is gained to critical systems.

Pre-software-release compromises are very dangerous because the resulting packages are signed with their creator's digital identity and can bypass application whitelisting technologies. It's almost impossible to tell that something's wrong with them, at least for regular users.

A simpler supply chain attack occurs when attackers manage to compromise only the internet-accessible web servers that a vendor uses to distribute software updates or new releases. In that case, all they can do is replace the legitimate files with modified ones that contain malware. Such modifications are theoretically detectable because they break digital signatures—if those programs are digitally signed. But there are plenty of programs out there that don't validate their own updates by checking digital signatures.
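The minimum an updater can do is verify that what it downloaded matches a trusted reference before running it. Here is a simplified sketch of that integrity check in Python: it compares the downloaded file's SHA-256 digest against a known-good value. In a real updater, the reference digest would itself be covered by the vendor's digital signature (full signature verification needs a crypto library and the vendor's public key, which this sketch omits).

```python
import hashlib
import hmac

def update_is_intact(update_bytes: bytes, expected_sha256_hex: str) -> bool:
    """Check a downloaded update against a trusted SHA-256 reference digest.

    Simplified: a production updater would verify the vendor's signature
    over this digest rather than trusting a bare hash.
    """
    digest = hashlib.sha256(update_bytes).hexdigest()
    # Constant-time comparison avoids leaking how much of the digest matched.
    return hmac.compare_digest(digest, expected_sha256_hex)
```

A tampered file, like the ones swapped in on a compromised update server, fails this check as long as the reference digest was obtained through a channel the attacker doesn't control.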

In February, Microsoft reported a supply chain attack against technology and financial organizations where attackers compromised the update servers for an unnamed third-party editing tool. The hackers used their access to deliver an unsigned malware executable as an update for the tool, which the program then downloaded and executed.

Not all programs download their updates as stand-alone files, Michael Gorelik, vice-president of research and development at security firm Morphisec, told me. Some updates are delivered as chunks of code that are loaded and executed by applications directly in memory and that code is not typically signed, he said.

There are also many applications that don't receive their updates over secure encrypted channels like HTTPS. This exposes them to man-in-the-middle attacks. Hackers in a position to intercept internet traffic between users' computers and the update servers for such apps—for example over insecure Wi-Fi networks or through compromised routers—can simply send malicious updates to those computers without needing to compromise the vendor's servers. This is another reason why it's important for software to validate updates by checking digital signatures.

There are also supply chain attacks that happen with the knowledge of software developers, or at least the developers who control the software at a particular point in time. Companies and software products are being bought and sold frequently and the changes in ownership are not always transparent to end users. There have been cases where the new owners of an application decided to include malware or adware in new versions.

In 2014, before Google tightened its rules for Google Chrome extensions, there were several incidents where extensions were bought from their original developers for four-figure sums and were then modified to steal browsing data or display intrusive ads. A similar thing happened recently with a WordPress plug-in and even though WordPress is not a desktop application, the concept was the same.

Supply chain compromises can also happen through third-party code that developers decide to use in their projects. Modern applications contain numerous third-party libraries, frameworks and advertising SDKs (software development kits). If any of these components gets compromised, the malicious code could spread to thousands of other programs due to such integrations.

Security researchers from Check Point Software Technologies recently found around 50 malware-infected Android applications hosted on Google Play that had been downloaded millions of times. They determined that the malicious code was actually part of a third-party SDK that app developers had integrated into their apps.

There have also been cases where Android devices came with malicious applications preloaded in their firmware. This is a very potent type of supply chain attack because preinstalled applications often have system privileges and cannot be uninstalled by users or even antivirus programs running on the device. Mobile antivirus programs have the same privileges as regular apps, so they cannot remove system applications that were already part of the firmware.

There's no simple defense

"Supply chain attacks are almost impossible to detect by regular consumers because of their complexity," Bogdan Botezatu, a senior analyst at antivirus vendor Bitdefender, told me. "Depending on the security solution installed on the victim's machine, an attack could be stopped or not. Supply chain attacks that target hardware vendors though, are impossible to detect because malicious firmware can compromise the operating system or the locally installed security solutions."

Companies have more options to defend themselves because they can—and should—carefully choose the software vendors they decide to work with based on their security track record. Before signing contracts, they can ask suppliers to share the results of their periodic network security audits and can inquire about their internal security practices.

Many supply chain attacks use memory injection techniques where malicious code is directly loaded in the memory of compromised processes and doesn't create files or leave other digital traces on disks. Not all endpoint security solutions are equipped to detect such fileless malware threats, but there are some enterprise products that can. In general, companies have access to better security solutions and technologies than consumers.

Ultimately it is the software developers themselves that need to have strong internal auditing and code review practices in place in order to ensure that the products they release perform as originally intended, Botezatu said.

Developers are an attractive target

The rise in supply chain attacks is directly correlated with an increase in the number of attacks against developers and systems engineers because these individuals typically have credentials on their computers that can provide privileged access to the development and IT infrastructure of their employers.

In March, a group of hackers launched phishing attacks against developers with accounts on GitHub. The goal was to infect their computers with a malware program that could log keystrokes, take screenshots and interact with authentication smartcards attached to their computers.

In 2013, a group of hackers compromised a popular iOS development forum and injected an exploit for an unpatched Java vulnerability into its pages. The exploit infected visitors' computers with spying malware and affected developers from many large companies, including Twitter, Facebook and Apple.

Since supply chain attacks offer a very efficient way to bypass traditional defenses and compromise a large number of computers, more and more hackers are likely to adopt this attack method going forward. The recent CCleaner attack was used to deploy additional specialized malware on 40 computers belonging to 12 technology companies including Sony, Intel, VMware, Samsung and Asus. There's a possibility the hackers might have intended to further compromise those companies' networks and systems in order to execute additional supply chain attacks through their own products.

Some security researchers are convinced there are already other software programs out there—unrelated to the CCleaner hack—that have been compromised by supply chain attacks, but which have yet to be discovered. This means malware might be running right now on users' computers thanks to a legitimate application or update they've downloaded from a trusted developer.

Welcome to the era of supply chain attacks.

 

DARPA Is Building a $10 Million, Open Source, Secure Voting System

The system will be fully open source and designed with newly developed secure hardware to make the system not only impervious to certain kinds of hacking, but also allow voters to verify that their votes were recorded accurately.

Mar 14 2019, 4:02pm

Image: Shutterstock

For years security professionals and election integrity activists have been pushing voting machine vendors to build more secure and verifiable election systems, so voters and candidates can be assured election outcomes haven’t been manipulated.

Now they might finally get this thanks to a new $10 million contract the Defense Department’s Defense Advanced Research Projects Agency (DARPA) has launched to design and build a secure voting system that it hopes will be impervious to hacking.

The first-of-its-kind system will be designed by an Oregon-based firm called Galois, a longtime government contractor with experience in designing secure and verifiable systems. The system will use fully open source voting software, instead of the closed, proprietary software currently used in the vast majority of voting machines, which no one outside of voting machine testing labs can examine. More importantly, it will be built on secure open source hardware, made from secure designs and techniques developed over the last year as part of a special program at DARPA. The voting system will also be designed to create fully verifiable and transparent results so that voters don’t have to blindly trust that the machines and election officials delivered correct results.

But DARPA and Galois won’t be asking people to blindly trust that their voting systems are secure—as voting machine vendors currently do. Instead, they’ll publish the source code for the software online and bring prototypes of the systems to the Def Con Voting Village this summer and next, so that hackers and researchers can freely examine the systems themselves and conduct penetration tests to gauge their security. They’ll also work with a number of university teams over the next year to examine the systems in formal test environments.

“Def Con is great, but [hackers there] will not give us as much technical details as we want [about problems they find in the systems],” Linton Salmon, program manager in DARPA’s Microsystems Technology Office who is overseeing the project, said in a phone call. “Universities will give us more information. But we won’t have as many people or as high visibility when we do it with universities.”

The systems Galois designs won’t be available for sale. But the prototypes it creates will be available for existing voting machine vendors or others to freely adopt and customize without costly licensing fees or the millions of dollars it would take to research and develop a secure system from scratch.

“We will not have a voting system that we can deploy. That’s not what we do,” said Salmon. “We will show a methodology that could be used by others to build a voting system that is completely secure.”

Joe Kiniry is the principal scientist at Galois who is leading the project at his company. Kiniry has been involved in efforts to secure elections for years as part of a separate company he runs called Free & Fair. He’s consulted with foreign governments about their election systems, and his company has been working with states in the US to design robust post-election audits. But the idea to create a secure voting system didn’t come from Kiniry; it came from DARPA.

“DARPA was searching for a sexy demonstration for the [secure hardware] program. What could you put on secure hardware that people would care about and understand?” Kiniry said.

They needed a project that would be unclassified so DARPA could talk about it publicly.

“We wanted something where there could be a lot of people who could look at this in an open way and critique it and find problems,” said Salmon.

The project will leverage the hefty resources of DARPA and its considerable security experience, and if it works, it could help solve a pressing national problem around election security and integrity.

“If we were to build a fake radar system, it could demonstrate secure hardware, but it wouldn’t be useful to anybody. [DARPA] love the fact that we’re building a demonstrator that might actually be useful to the world,” Kiniry said.

Kiniry said Galois will design two basic voting machine types. The first will be a ballot-marking device that uses a touch-screen for voters to make their selections. That system won’t tabulate votes. Instead it will print out a paper ballot marked with the voter’s choices, so voters can review them before depositing them into an optical-scan machine that tabulates the votes. Galois will bring this system to Def Con this year.

Many ballot-marking systems on the market today have been criticized by security professionals because they print barcodes on the ballot for the scanner to read, instead of the human-readable portion that voters review. Someone could subvert a barcode to say one thing while the human-readable portion says something else. Kiniry said they’re aiming to design their system without barcodes.

The optical-scan system will print a receipt with a cryptographic representation of the voter’s choices. After the election, the cryptographic values for all ballots will be published on a website, where voters can verify that their ballot and votes are among them.

“That receipt does not permit you to prove anything about how you voted, but does permit you to prove that the system accurately captured your intent and your vote is in the final tally,” Kiniry said.

Members of the public will also be able to use the cryptographic values to independently tally the votes to verify the election results so that tabulating the votes isn't a closed process solely in the hands of election officials.

“Any organization [interested in verifying the election results] that hires a moderately smart software engineer [can] write their own tabulator,” Kiniry said. “We fully expect that Common Cause, League of Women Voters and the [political parties] will all have their own tabulators and verifiers.”
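The receipt scheme described above can be sketched as a salted hash commitment. This is a toy illustration of the general idea, not Galois's actual cryptographic design: the election publishes only commitments, which reveal nothing about the vote on their own, while a voter holding the receipt (the salt plus their choices) can re-derive their commitment and confirm it is in the published list.

```python
import hashlib
import secrets

def commit(choices: str, salt: bytes) -> str:
    """Salted SHA-256 commitment to a ballot's choices."""
    return hashlib.sha256(salt + choices.encode()).hexdigest()

# Election side: commit to each ballot with a fresh random salt and
# publish only the commitments (the ballot text itself stays private).
salt = secrets.token_bytes(16)
published = {commit("president:candidate_a", salt)}

# Voter side: the receipt (salt + choices) lets the voter re-derive
# the commitment and check that it appears in the published list.
def receipt_is_included(choices: str, salt: bytes, published: set) -> bool:
    return commit(choices, salt) in published
```

Because the published set contains only salted hashes, a third party can tally and cross-check the full list without learning how any individual voted, which is the property the independent tabulators Kiniry mentions rely on.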

The second system Galois plans to build is an optical-scan system that reads paper ballots marked by voters by hand. They’ll bring that system to Def Con next year.

*

The voting system project grew out of a larger DARPA program focused on developing secure hardware. That program, called System Security Integrated Through Hardware and Firmware, or SSITH, was launched in 2017 and aims to develop secure hardware, and the design tools to build it, so that the hardware is impervious to most of the software attacks prevalent today.

Currently most security is focused on software protections for operating systems, browsers, and other programs.


“In general, software has been the way people try to solve the problems because software is adaptable,” Salmon noted. There are some hardware security solutions already, he said, "but they don’t go far enough and … require too much power and performance….We want to fix this in hardware, and then no matter what [vulnerabilities] you have in software, [attackers] would not be able to [exploit] them.”

The basic problem, he said, is that most hardware is gullible and has no way of distinguishing between acceptable and unacceptable behavior. If an attacker’s exploit tells the machine to do something malicious, the hardware complies without making judgments about whether it should do this.
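A toy model makes the contrast concrete. In the sketch below (an illustration of the general principle, not SSITH's actual designs), every buffer carries a base/length tag and the "hardware" refuses any write outside the tagged bounds, no matter what the possibly exploited software requests. Real gullible hardware has no such check and would happily perform the out-of-bounds write.

```python
class TaggedMemory:
    """Toy model of hardware-enforced bounds checking: every buffer
    carries a base/length tag, and the 'hardware' rejects accesses
    outside it, regardless of what software asks for."""

    def __init__(self, size: int):
        self.cells = [0] * size
        self.tags = {}  # buffer id -> (base, length)

    def allocate(self, buf_id: str, base: int, length: int):
        self.tags[buf_id] = (base, length)

    def store(self, buf_id: str, offset: int, value: int):
        base, length = self.tags[buf_id]
        if not 0 <= offset < length:  # enforced below software's control
            raise MemoryError("bounds violation blocked in hardware")
        self.cells[base + offset] = value
```

With this kind of enforcement in the hardware itself, a classic buffer-overflow exploit in the software stack is stopped at the point where it tries to corrupt adjacent memory, which is the "judgment" Salmon says current hardware lacks.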

“I’m trying to change that and make hardware part of the solution to security rather than a bystander,” said Salmon. “This is only the beginning. This is a problem that is so big that one DARPA program isn’t going to solve even 20 percent of the problem.”

In a voting system, this means the hardware would prevent, for example, someone entering a voting booth, slipping a malicious memory card into a machine, and tricking it into recording 20 votes for a single vote cast, as researchers have shown could be done with some voting systems.

“Our goal is to make this so that the hardware is blocked against all of these various types of attack from the external world. If this is successful, and if the software put on top is equally successful, then it means people can’t hack in and … alter votes. It would also mean that the person who votes would get some verification that they did vote and all of that would be done in a manner that hackers couldn’t change,” Salmon said.

The DARPA secure hardware program involves six teams from several universities as well as Lockheed Martin. Each team was tasked with creating three secure CPU designs. Galois, which is part of the SSITH project, plans to build its voting system on top of the secure hardware designed by these teams, and will create a prototype for each CPU design.

“It’s normal, open source voting system software, which just happens to be running on top of those secure CPUs,” said Kiniry. “Our contention is… that a normal voting system running on COTS [commercial off-the-shelf hardware] will be hacked. A normal voting system running on the secure hardware will probably not be hacked.”

The teams aren’t just developing secure CPUs. To take full advantage of what a secure CPU offers, they’re also building new versions of open source C compilers so the entire software stack on a system—the operating system, the kernel, the libraries, and all the user software written in C—can be recompiled for the new hardware.

“So it really is a powerful software play and hardware play,” Kiniry said.

The program isn’t about architecting new CPUs from scratch, but about proving that existing hardware can be modified to be secure, avoiding the need to redesign hardware from the ground up.


But even so, the secure designs are expected to change how new CPUs are architected going forward.

Joe Fitzpatrick, a noted hardware security expert who trains professionals on hardware hacking and security, calls the DARPA secure hardware project a lofty goal that will be great if it succeeds.

“I can’t tell if they truly are architecting a new CPU that is truly resistant to all these [attacks]. But if they designed a new CPU that was able to understand and determine malicious or correct operations from the software, that’s not trivial. That would be pretty amazing,” said Fitzpatrick.

Peiter “Mudge” Zatko, a former program manager at DARPA and noted security professional who has testified to Congress on security issues, said this and other DARPA projects are beneficial because they usually spawn new solutions. But he cautions that CPUs modified for security won’t solve all security problems.

“We should [also] work towards building processors that have more security principles inherent in them,” he told Motherboard.

Susan Greenhalgh, policy director for the National Election Defense Coalition, an election integrity group, hopes the systems Galois and DARPA are building will finally change the status quo of insecure voting.

“The [current systems are] woefully equipped and too prosaic to drive the quantum changes needed to face the nation-state actors that are threatening our democracy,” she told Motherboard. “Galois and DARPA have just stepped up and filled a vacuum of leadership at the federal level to address the well-documented vulnerabilities in US voting machines that constitute a national security crisis.”

 

Hackers Hijacked ASUS Software Updates to Install Backdoors on Thousands of Computers

The Taiwan-based tech giant ASUS is believed to have pushed the malware to hundreds of thousands of customers through its trusted automatic software update tool after attackers compromised the company’s update server.

Mar 25 2019, 1:00pm

Image: Shutterstock


Researchers at cybersecurity firm Kaspersky Lab say that ASUS, one of the world’s largest computer makers, was used to unwittingly install a malicious backdoor on thousands of its customers’ computers last year after attackers compromised a server for the company’s live software update tool. The malicious file was signed with legitimate ASUS digital certificates to make it appear to be an authentic software update from the company, Kaspersky Lab says.

ASUS, a multi-billion dollar computer hardware company based in Taiwan that manufactures desktop computers, laptops, mobile phones, smart home systems, and other electronics, was pushing the backdoor to customers for at least five months last year before it was discovered, according to new research from the Moscow-based security firm.

The researchers estimate half a million Windows machines received the malicious backdoor through the ASUS update server, although the attackers appear to have been targeting only about 600 of those systems. The malware searched for targeted systems through their unique MAC addresses. Once on a system, if it found one of these targeted addresses, the malware reached out to a command-and-control server the attackers operated, which then installed additional malware on those machines.
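Kaspersky has reported that the trojanized updater carried not the target MAC addresses themselves but hashes of them, against which it compared the hashes of the local machine's adapters. A minimal sketch of that gating logic, with a made-up target list and MD5 as the hash (per Kaspersky's description; treat the specifics here as illustrative):

```python
import hashlib

# Hypothetical target list: the real updater reportedly shipped hashes
# of the ~600 targeted MAC addresses rather than the addresses themselves,
# so the target list alone doesn't reveal who was being hunted.
TARGET_MAC_HASHES = {
    hashlib.md5(b"00:11:22:33:44:55").hexdigest(),
}

def is_targeted(mac: str) -> bool:
    """The second-stage payload is fetched only when a local MAC matches."""
    normalized = mac.strip().lower().encode()
    return hashlib.md5(normalized).hexdigest() in TARGET_MAC_HASHES
```

This design explains why half a million machines could carry the backdoor while only about 600 ever contacted the command-and-control server: on every other system, the check fails and the implant stays dormant.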

Kaspersky Lab said it uncovered the attack in January after adding a new supply-chain detection technology to its scanning tool to catch anomalous code fragments hidden in legitimate code or catch code that is hijacking normal operations on a machine. The company plans to release a full technical paper and presentation about the ASUS attack, which it has dubbed ShadowHammer, next month at its Security Analyst Summit in Singapore. In the meantime, Kaspersky has published some of the technical details on its website.



The issue highlights the growing threat from so-called supply-chain attacks, where malicious software or components get installed on systems as they’re manufactured or assembled, or afterward via trusted vendor channels. Last year the US launched a supply chain task force to examine the issue after a number of supply-chain attacks were uncovered in recent years. Although most attention on supply-chain attacks focuses on the potential for malicious implants to be added to hardware or software during manufacturing, vendor software updates are an ideal way for attackers to deliver malware to systems after they’re sold, because customers trust vendor updates, especially if they’re signed with a vendor’s legitimate digital certificate.

“This attack shows that the trust model we are using based on known vendor names and validation of digital signatures cannot guarantee that you are safe from malware,” said Vitaly Kamluk, Asia-Pacific director of Kaspersky Lab’s Global Research and Analysis Team, who led the research. He noted that when the researchers contacted the company in January, ASUS denied that its server had been compromised or that the malware had come from its network. But the download path for the malware samples Kaspersky collected leads directly back to the ASUS server, Kamluk said.

Motherboard sent ASUS a list of the claims made by Kaspersky in three separate emails on Thursday but has not heard back from the company.

Read more: What Is a 'Supply Chain Attack?'

But the US-based security firm Symantec confirmed the Kaspersky findings on Friday after being asked by Motherboard to see if any of its customers also received the malicious download. The company is still investigating the matter but said in a phone call that at least 13,000 computers belonging to Symantec customers were infected with the malicious software update from ASUS last year.

“We saw the updates come down from the Live Update ASUS server. They were trojanized, or malicious updates, and they were signed by ASUS,” said Liam O’Murchu, director of development for the Security Technology and Response group at Symantec.

This is not the first time attackers have used trusted software updates to infect systems. The infamous Flame spy tool, developed by some of the same attackers behind Stuxnet, was the first known attack to trick users in this way by hijacking the Microsoft Windows updating tool on machines to infect computers. Flame, discovered in 2012, was signed with an unauthorized Microsoft certificate that attackers tricked Microsoft’s system into issuing to them. The attackers in that case did not actually compromise Microsoft’s update server to deliver Flame. Instead, they were able to redirect the software update tool on the machines of targeted customers so that they contacted a malicious server the attackers controlled instead of the legitimate Microsoft update server.

Two different attacks discovered in 2017 also compromised trusted software updates. One involved CCleaner, a popular computer-cleanup tool, which delivered malware to customers via a software update; more than 2 million customers received that malicious update before it was discovered. The other was the infamous NotPetya attack, which began in Ukraine and infected machines via a malicious update to an accounting software package.

Costin Raiu, company-wide director of Kaspersky’s Global Research and Analysis Team, said the ASUS attack is different from these others. “I’d say this attack stands out from previous ones while being one level up in complexity and stealthiness. The filtering of targets in a surgical manner by their MAC addresses is one of the reasons it stayed undetected for so long. If you are not a target, the malware is virtually silent,” he told Motherboard.

But even if silent on non-targeted systems, the malware still gave the attackers a backdoor into every infected ASUS system.

Tony Sager, senior vice president at the Center for Internet Security who did defensive vulnerability analysis for the NSA for years, said the method the attackers chose to target specific computers is odd.

“Supply chain attacks are in the ‘big deal’ category and are a sign of someone who is careful about this and has done some planning,” he told Motherboard in a phone call. “But putting something out that hits tens of thousands of targets when you’re really going only after a few is really going after something with a hammer.”

Kaspersky researchers first detected the malware on a customer’s machine on January 29. After they created a signature to find the malicious update file on other customer systems, they discovered that more than 57,000 Kaspersky customers had been infected with it. That victim toll only accounts for Kaspersky customers, however. Kamluk said the real number is likely in the hundreds of thousands.

Russia accounted for the largest share of infected machines belonging to Kaspersky customers (about 18 percent), followed by smaller numbers in Germany and France. Only about 5 percent of infected Kaspersky customers were in the United States. Symantec’s O’Murchu said that about 15 percent of the 13,000 infected machines belonging to his company’s customers were in the U.S.

Kamluk said Kaspersky notified ASUS of the problem on January 31, and a Kaspersky employee met with ASUS in person on February 14. But he said the company has been largely unresponsive since then and has not notified ASUS customers about the issue.

The attackers used two different ASUS digital certificates to sign their malware. The first expired in mid-2018, at which point the attackers switched to a second legitimate ASUS certificate.

Kamluk said ASUS continued to use one of the compromised certificates to sign its own files for at least a month after Kaspersky notified the company of the problem, though it has since stopped. But Kamluk said ASUS has still not invalidated the two compromised certificates, which means the attackers or anyone else with access to the unexpired certificate could still sign malicious files with it, and machines would view those files as legitimate ASUS files.
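The danger here can be illustrated schematically: a signature validates as long as the certificate is inside its validity window and has not been revoked, so until the vendor actually revokes a stolen certificate, anything signed with it looks legitimate. The toy check below is not real certificate-chain code; the serial numbers and dates are invented for illustration.

```python
from datetime import datetime, timezone

# Serials the vendor has revoked -- empty until revocation actually
# happens, which is exactly the gap Kaspersky flagged.
REVOKED_SERIALS: set = set()

def cert_trusted(serial: int, not_before: datetime, not_after: datetime,
                 now: datetime) -> bool:
    """A signature checks out only if the certificate is inside its
    validity window AND has not been revoked. Time-validity alone says
    nothing about whether the signing key was stolen."""
    return serial not in REVOKED_SERIALS and not_before <= now <= not_after

now = datetime(2019, 3, 1, tzinfo=timezone.utc)
# First (hypothetical) cert: expired mid-2018, so no longer usable.
expired = cert_trusted(1, datetime(2015, 1, 1, tzinfo=timezone.utc),
                       datetime(2018, 6, 30, tzinfo=timezone.utc), now)
# Second cert: unexpired and never revoked, so still trusted.
stolen = cert_trusted(2, datetime(2017, 1, 1, tzinfo=timezone.utc),
                      datetime(2021, 1, 1, tzinfo=timezone.utc), now)
print(expired, stolen)  # False True
```

Only adding the compromised serial to the revocation set would close the hole, which is why leaving the certificates unrevoked matters.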

This wouldn't be the first time ASUS was accused of compromising the security of its customers. In 2016, the Federal Trade Commission charged the company with misrepresentation and unfair security practices over multiple vulnerabilities in its routers, cloud backup storage, and firmware update tool that would have allowed attackers to gain access to customer files and router login credentials, among other things. The FTC claimed ASUS knew about those vulnerabilities for at least a year before fixing them and notifying customers, putting nearly a million US router owners at risk of attack. ASUS settled the case by agreeing to establish and maintain a comprehensive security program subject to independent audit for 20 years.

The ASUS live update tool that delivered malware to customers last year is installed at the factory on ASUS laptops and other devices. When users enable it, the tool contacts the ASUS update server periodically to see if any firmware or other software updates are available.
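A live-update client of this kind typically polls the vendor's server on a timer, compares version numbers, and installs whatever the server offers. The sketch below is a generic illustration of that pattern, not ASUS's actual tool; the polling interval and function names are invented.

```python
import time

# Assumed polling period; the real tool's schedule is not public.
CHECK_INTERVAL = 6 * 60 * 60

def is_newer(available: str, current: str) -> bool:
    """Numeric comparison of dotted version strings (so '3.10' > '3.4')."""
    return tuple(map(int, available.split("."))) > tuple(map(int, current.split(".")))

def poll_once(fetch_latest_version, current: str) -> bool:
    """One iteration of the update loop: ask the vendor server for the
    latest version and report whether an update is available. Note that
    this design trusts whatever the server returns -- which is why a
    compromised update server is so dangerous."""
    return is_newer(fetch_latest_version(), current)

# A real client would run roughly:
# while True:
#     if poll_once(query_vendor_server, INSTALLED_VERSION):
#         download_verify_and_install()   # hypothetical
#     time.sleep(CHECK_INTERVAL)

print(poll_once(lambda: "3.4.2", "3.4.1"))  # True
```

The key point for this story is the trust relationship: the client's only defense against a malicious "update" is the signature check, and here the signatures were legitimate.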

The malicious file pushed to customer machines through the tool was called setup.exe and purported to be an update to the update tool itself. It was actually a three-year-old ASUS update file from 2015 that the attackers had injected with malicious code before signing it with a legitimate ASUS certificate. They appear to have pushed it out to users between June and November 2018, according to Kaspersky Lab. Kamluk said the use of an old binary with a current certificate suggests the attackers had access to the server where ASUS signs its files but not to the build server where it compiles new ones; the fact that they reused the same binary each time suggests they controlled only part of the signing infrastructure, not the whole ASUS network. Legitimate ASUS software updates were still being pushed to customers during the same period, but those were signed with a different certificate that used enhanced validation protection, Kamluk said, making it more difficult to spoof.

The Kaspersky researchers collected more than 200 samples of the malicious file from customer machines, which is how they discovered the attack was multi-staged and targeted.

Buried in those malicious samples were hard-coded MD5 hash values that turned out to be unique MAC addresses for network adapter cards. MD5 is a hashing algorithm that produces a short, fixed-length cryptographic value from any input data. Every network card has a unique hardware address assigned by its manufacturer, and the attackers hashed each MAC address they were seeking before hard-coding the hashes into their malicious file, to make it harder to see what the malware was doing. The malware was seeking 600 unique MAC addresses, though the actual number of targeted customers may be larger: Kaspersky can only see the hashes hard-coded into the particular malware samples found on its customers’ machines.
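The targeting scheme can be sketched in a few lines of Python. The MAC address, its formatting, and the target list below are invented for illustration; the real malware carried 600 such digests.

```python
import hashlib

# Hypothetical hard-coded target list: MD5 digests rather than raw MAC
# addresses, so the targets can't simply be read out of the binary.
TARGET_HASHES = {hashlib.md5(b"00:1a:2b:3c:4d:5e").hexdigest()}

def is_target(mac: str) -> bool:
    """Hash this machine's MAC address and compare it against the
    embedded list; only on a match would the second stage be fetched.
    (The exact address formatting the malware used is an assumption.)"""
    return hashlib.md5(mac.lower().encode()).hexdigest() in TARGET_HASHES

print(is_target("00:1A:2B:3C:4D:5E"), is_target("aa:bb:cc:dd:ee:ff"))  # True False
```

On the hundreds of thousands of non-matching machines, a check like this simply fails silently, which is part of why the campaign stayed hidden for months.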

The Kaspersky researchers were able to crack most of the hashes they found to determine the MAC addresses, which helped them identify what network cards the victims had installed on their machines, but not the victims themselves. Any time the malware infected a machine, it collected the MAC address from that machine’s network card, hashed it, and compared that hash against the ones hard-coded in the malware. If it found a match to any of the 600 targeted addresses, the malware reached out to asushotfix.com, a site masquerading as a legitimate ASUS site, to fetch a second-stage backdoor that it downloaded to that system. Because only a small number of machines contacted the command-and-control server, this helped the malware stay under the radar.
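Cracking the hashes is tractable because MAC addresses form a tiny, structured keyspace: the first three bytes identify the manufacturer (the OUI), leaving only 2^24 device-specific suffixes to try per known vendor prefix. A toy version of that brute force, with invented values:

```python
import hashlib
from itertools import product
from typing import Optional

def crack_mac_hash(target_digest: str, oui: str) -> Optional[str]:
    """Brute-force the three device-specific bytes under a known
    manufacturer prefix. At roughly 16.7 million MD5s per OUI this
    takes seconds to minutes on commodity hardware."""
    for a, b, c in product(range(256), repeat=3):
        candidate = f"{oui}:{a:02x}:{b:02x}:{c:02x}"
        if hashlib.md5(candidate.encode()).hexdigest() == target_digest:
            return candidate
    return None

digest = hashlib.md5(b"00:1a:2b:00:00:05").hexdigest()
print(crack_mac_hash(digest, "00:1a:2b"))  # 00:1a:2b:00:00:05
```

Recovering an address this way identifies the network card's manufacturer, as the researchers found, but not the person who owns the machine.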

“They were not trying to target as many users as possible,” said Kamluk. “They wanted to get into very specific targets and they already knew in advance their network card MAC address, which is quite interesting.”

Symantec’s O’Murchu said he’s not sure yet if any of his company’s customers were among those whose MAC addresses were on the target list and received the second-stage backdoor.

The command-and-control server that delivered the second-stage backdoor was registered May 3 last year but was shut down in November, before Kaspersky discovered the attack. Because of this, the researchers were unable to obtain a copy of the second-stage backdoor pushed out to victims or identify victim machines that had contacted the server. Kaspersky believes at least one of its customers in Russia was infected with the second-stage backdoor, because that machine contacted the command-and-control server on October 29 last year, but Raiu says the company doesn’t know the machine owner’s identity and so cannot contact him to investigate further.

There were early hints that a signed and malicious ASUS update was being pushed to users in June 2018, when a number of people posted comments in a Reddit forum about a suspicious ASUS alert that popped up on their machines for a “critical” update. “ASUS strongly recommends that you install these updates now,” the alert warned.

In a post titled “ASUSFourceUpdater.exe is trying to do some mystery update, but it won't say what,” a user named GreyWolfx wrote, “I got an update popup from a .exe that I had never seen before today….I’m just curious if anyone knows what this update would possibly be for?”

When he and other users clicked on their ASUS updater tool to get information about the update, the tool showed no recent updates had been issued from ASUS. But because the file was digitally signed with an ASUS certificate and because scans of the file on the VirusTotal web site indicated it was not malicious, many accepted the update as legitimate and downloaded it to their machines. VirusTotal is a site that aggregates dozens of antivirus programs; users can upload suspicious files to the site to see if any of the tools detect it as malicious.
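A common way to use VirusTotal without uploading a file is to look it up by hash: if anyone has submitted the same file before, the existing scan results come back. A minimal sketch, assuming VirusTotal's v3 hash-lookup endpoint (an API key would be required for an actual request):

```python
import hashlib

def vt_file_lookup_url(file_bytes: bytes) -> str:
    """Build a hash-based VirusTotal lookup URL for a suspicious file.
    Submitting the SHA-256 avoids uploading the file itself."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return f"https://www.virustotal.com/api/v3/files/{digest}"

# Sanity check: the well-known SHA-256 of empty input.
print(hashlib.sha256(b"").hexdigest())
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
```

A clean result only means no engine flagged the file at that moment; as this incident shows, a freshly signed trojanized binary can pass every scan.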

“I uploaded the executable [to VirusTotal] and it comes back as a validly signed file without issue,” one user wrote. “The spelling of 'force' and the empty details window are indeed odd, but I noticed odd grammar errors in other ASUS software installed on this system, so it's not a smoking gun by itself,” he noted.

Kamluk and Raiu said this may not be the first time the ShadowHammer attackers have struck. They said they found similarities between the ASUS attack and ones previously conducted by a group dubbed ShadowPad by Kaspersky. ShadowPad targeted a Korean company that makes enterprise software for administering servers; the same group was also linked to the CCleaner attack. Although millions of machines were infected with the malicious CCleaner software update, only a subset of these got targeted with a second stage backdoor, similar to the ASUS victims. Notably, ASUS systems themselves were on the targeted CCleaner list.

The Kaspersky researchers believe the ShadowHammer attackers were behind the ShadowPad and CCleaner attacks and obtained access to the ASUS servers through the latter attack.

“ASUS was one of the primary targets of the CCleaner attack,” Raiu said. “One of the possibilities we are taking into account is that’s how they initially got into the ASUS network and then later through persistence they managed to leverage the access … to launch the ASUS attack.”

Listen to CYBER, Motherboard’s new weekly podcast about hacking and cybersecurity.

#####EOF##### 700 bliss is a noise project that will save you - i-D

700 bliss is a noise project that will save you

Moor Mother and DJ Haram premiere the video for “Cosmic Slop,” a track from their cathartic, essential noise project.

|
Mar 28 2018, 6:23pm

Collaborative noise project 700 Bliss was born at Philly parties and underground club nights. Musician/poet Moor Mother (Camae Ayewa) and producer DJ Haram were friends before they were musical collaborators. The two linked up through curating and playing shows in the 215. Camae and Haram’s experimental noise project now has an EP, Spa 700. While it’s been some years in the making, the release might be the most essential listening of 2018. Spa 700 is a cleansing experience achieved not by bathing you in lightness, but by drowning you in spaced-out commotion. Today, i-D premieres the video for “Cosmic Slop,” a warped journey through delirious beats and America’s dark history.

“Haram sent me this track a long time ago. I must have wrote it in like 20 minutes after I got it,” Camae Ayewa tells i-D. “It was one of the easiest tracks to write.” Haram agrees, adding, “‘Cosmic Slop’ was pretty straightforward, beat plus rap. The creation process is a lot of bedroom studio time and getting on the same page.” The video, by Eva Wǒ and Qué Pequeño, was an equally intuitive process for the two frank Philly femmes. It shows Haram and Camae’s friends kicking back in a messy basement and dancing to a cathartic live set. “We just wanted to shoot a video with our friends,” Camae says. “That’s it. There was no agenda.”

It’s not shocking that “Cosmic Slop” was quick to write. Camae has spent years spitting about black American history, harnessing Afrofuturism, and political protest on her essential 2016 album Fetish Bones. Here, her passion-drenched words about rage and survival spill over Haram’s celestial, strung-out club beats. “You late, bitch/It’s the end of the world, you bait bitch.” As Haram’s production intensifies and closes in around the vocals, the words are seared into your brain. Images of both the abstract and historical arise: spaceships, motherships, slave ships. Watch the video below and get the most rigorous cleanse of your life via 700 Bliss’s Bandcamp.

 
#####EOF##### Vice Magazine - VICE

VICE Magazine

The Truth and Lies Issue

Instagram and Snapchat Are Ruining Our Memories

Documenting our lives for Snapchat and Instagram can decrease the likelihood of retaining those moments as a significant memory.
Eda Yu
3 days ago
The Truth and Lies Issue

How to Manipulate Your Way into Power, in Four Complicated Steps

Grigori Yefimovich Rasputin has been dead for over 100 years, but his tactics live on.
Amos Barshad
3.28.19
The Truth and Lies Issue

These Colorful Photos of America Represent One Photographer's Family History

In Emile Askey's new series, Monuments Are Forever, he takes a road trip throughout the South and Southwest of America to retrace the routes of his childhood travels.
Emile Askey
3.27.19
The Truth and Lies Issue

The Power of the Nocebo Effect

Nocebo is the evil twin of the placebo effect—and my constant companion. I set out to find out what it is, and how I could learn to harness the more positive effects of medical mind games.
Shayla Love
3.26.19
The Truth and Lies Issue

The Website That Took Over Men's Media

In 2008, one dude started a simple blog to show off his stuff. It's become much, much more than that.
Allie Conti
3.25.19
The Truth and Lies Issue

Powerful Photos of What Was Left Behind by America's Secret War in Laos

In her series, Operation Palace Dog, photographer Sadie Wechsler looks for ways to capture the veiled impact of American power in Laos.
Sadie Wechsler
3.22.19
ALIENS

How the Increasing Belief in Extraterrestrials Inspires Our Real World

According to research by psychologists, belief in nonhuman intelligence is increasing in unprecedented ways, and many contemporary technopreneurs are being inspired in their work by it.
D.W. Pasulka
3.21.19
The Truth and Lies Issue

Inside The Fragmented Minds of People With Dissociative Identity Disorder

The condition was formerly known as “multiple personality disorder,” and the medical field is still in disagreement on whether it is real. But does ‘real’ matter when a diagnosis can help?
Shayla Love
3.17.19
magic

When Seeing Isn't Believing

As long as we've had screens, magicians have used them to share their work, and audiences have questioned what they're seeing.
Eric Thurm
3.11.19
The Truth and Lies Issue

Why Do Reality Stars Lie on Camera?

They know they'll get caught. Are they just putting off the inevitable?
Allie Jones
3.11.19
The Truth and Lies Issue

How a Young Girl's Death in 2000 Gave Birth to an Urban Legend

On July 1, in a tiny English village, eight-year-old Sarah Payne was playing with her brothers and sister in a country lane. Then she wasn’t. The media panic that followed is a case study in how urban legends are born and spread.
Gavin Haynes
3.11.19
The Truth and Lies Issue

How the 'Mandela Effect' Theory of False Memories Took Over the Internet

Have you misremembered the spellings of childhood favorites like 'The Berenstain Bears,' Froot Loops, and Jif peanut butter? This theory explains why.
Roisin Kiberd
3.11.19
 
#####EOF##### Startup Offers $3 Million to Anyone Who Can Hack the iPhone - Motherboard

Startup Offers $3 Million to Anyone Who Can Hack the iPhone

A new startup in Dubai is offering six and seven figure payouts for zero-day exploits for Android, iOS, Windows and Mac.

|
Apr 25 2018, 5:58pm

Image: Seth Laupus/Motherboard

A new startup is offering up to $3 million for tools to hack into Android and iOS devices, the highest public price offered for such tools.

The startup is called Crowdfense and is based in the United Arab Emirates. In an unusual move in the normally secretive industry of so-called zero-days, Crowdfense sent out a press release to reporters on Tuesday, advertising what it calls a bug bounty.

“Zero-days” or zero-day exploits are hacking tools that leverage bugs or vulnerabilities in computer systems that are unknown to the system’s developers. Over the years, improvements in the security of popular computers and cellphones have created a secretive and controversial industry dedicated to providing these tools to government agencies that need help hacking targets.

Crowdfense’s director Andrea Zapparoli Manzoni told me that he and his company are trying to join that market, purchasing zero-days from independent researchers and then selling them to law enforcement and intelligence agencies.

“When I think about government agencies I don’t think about the military part, I think about the civilian part, that works against crime, terrorism, and stuff like that,” Zapparoli told me in a phone interview. “We only focus on tools aimed at doing activities of law enforcement or intelligence, not aimed at destroying or deteriorating the functionality and effectiveness of the target systems—but only aimed at collecting intelligence.”

The company is only looking for zero-day exploits for Windows, MacOS, iOS, and Android. It’s not interested in exploits for Internet of Things devices, critical infrastructure, telecom companies, or popular sites such as Facebook, according to Zapparoli.

A graphic of the exploits Crowdfense is looking for, and the respective payouts. (Image: Crowdfense)

Crowdfense is trying to do things differently, “with the maximum possible transparency,” he said. Zapparoli said he doesn’t want to repeat the mistakes other companies in this industry made in the past, and specifically mentioned Hacking Team, an Italian vendor of spyware that’s infamous for selling hacking and surveillance tools to oppressive governments.

“Vetting customers is the most delicate part of our whole activity,” Zapparoli said.

For now, however, Zapparoli didn’t specify exactly how the company does its vetting or who it’s working with. He said Crowdfense is willing to sell to only “very few” customers if that’s what it takes to make sure its hacking tools don’t end up in the wrong hands. He said that in the future it might publish best practices and standards on how it vets customers, but for now it will “self-regulate.”

Got a tip? You can contact this reporter securely on Signal at +1 917 257 1382, OTR chat at lorenzo@jabber.ccc.de, or email lorenzo@motherboard.tv

The local government of the UAE has authorized Crowdfense to open shop in Dubai, Zapparoli said.

“When we have to sell outside of the UAE, normally there are no objections,” Zapparoli said.

In 2016, the UAE government was accused of trying to use an iPhone zero-day exploit against the well known human rights activist Ahmed Mansoor. That exploit was provided by the Israel-based company NSO Group.

Read more: iPhone Bugs Are Too Valuable to Report to Apple

Zapparoli also said that the company will take into consideration the controversial Wassenaar international arrangement, which regulates so-called “dual-use” technologies. That is, tools that can be used both in times of peace and war. Some countries that are part of the arrangement (the UAE is not part of it) consider certain zero-days as dual-use items. Zapparoli said it will be up to the researchers who sell their exploits to Crowdfense to abide by the arrangement.

The company has a budget of $10 million for this “bug bounty.” Its backers, for now, are also secret. Zapparoli declined to specify who invested in the company.

Adriel Desautels used to act as a broker between researchers who find and develop zero-day exploits, companies that acquire them, and the government customers who end up using them. His company, Netragard, worked with several government customers as well as surveillance technology providers such as Hacking Team. When Hacking Team got hacked and its list of customers was revealed, Desautels decided to leave the industry.

Desautels said that Crowdfense’s price list is in line with the market. He mentioned that before he quit the zero-day industry, he brokered the sale of an iOS zero-day that went for $4 million. But he’s skeptical of the business model. For Crowdfense to make it work, it will have to resell the same capabilities to multiple customers, he said, which could lead to problems.

The market, however, is there.

"When you're talking about iOS and Android devices, those kinds of targets, you're talking about real operational interests,” he told Motherboard in a phone call. “You have a need, you have somebody who's on the move that has a phone and you need to track who this person is. You need something you can tie directly to a person. That's when you spend that kind of money."

Crowdfense joins a crowded market. There are relatively public-facing companies such as Zerodium, which has gathered a lot of attention in the last few years by announcing similar multi-million-dollar bounties for popular software. There are also lesser-known, less bombastic firms such as the Australia-based Azimuth.

Correction: A previous version of this story quoted Zapparoli saying the UAE authorized Crowdfense to sell hacking tools. Zapparoli said that the government only authorized the company to set up shop in Dubai, not to sell zero-days, as there's no need for that authorization.

Get six of our favorite Motherboard stories every day by signing up for our newsletter.

 
#####EOF##### The Most Damaging Election Disinformation Campaign Came From Donald Trump, Not Russia - Motherboard
Image: Shutterstock

The Most Damaging Election Disinformation Campaign Came From Donald Trump, Not Russia

The Kremlin has been focused on undermining trust in American democracy and elections, but Donald Trump and the Republicans have done it better than Russia ever could.

|
Nov 19 2018, 3:26pm

Image: Shutterstock

The Weakest Link is Motherboard's third annual theme week dedicated to the future of hacking and cybersecurity. Follow along.

Listen to Motherboard’s new hacking podcast, CYBER, here.


Henry Farrell is professor of politics and international affairs at George Washington University. Bruce Schneier is a security technologist and the author of fourteen books, including most recently, Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World.

On November 4, 2016, the hacker “Guccifer 2.0,” a front for Russia’s military intelligence service, claimed in a blog post that the Democrats were likely to use vulnerabilities to hack the presidential elections. On November 9, 2018, President Donald Trump started tweeting about the senatorial elections in Florida and Arizona. Without any evidence whatsoever, he said that Democrats were trying to steal the election through “FRAUD.”

Cybersecurity experts would say that posts like Guccifer 2.0’s are intended to undermine public confidence in voting: a cyber-attack against the US democratic system. Yet Donald Trump’s actions are doing far more damage to democracy. So far, his tweets on the topic have been retweeted over 270,000 times, eroding confidence far more effectively than any foreign influence campaign.

We need new ideas to explain how public statements on the Internet can weaken American democracy. Cybersecurity today is not only about computer systems. It’s also about the ways attackers can use computer systems to manipulate and undermine public expectations about democracy. Not only do we need to rethink attacks against democracy; we also need to rethink the attackers as well.

This is one key reason why we wrote a new research paper which uses ideas from computer security to understand the relationship between democracy and information. These ideas help us understand attacks which destabilize confidence in democratic institutions or debate.

Our research implies that insider attacks from within American politics can be more pernicious than attacks from other countries. They are more sophisticated, employ tools that are harder to defend against, and lead to harsh political tradeoffs. The US can threaten charges or impose sanctions when Russian trolling agencies attack its democratic system. But what punishments can it use when the attacker is the US president?

Authoritarians have weaponized information flows

People who think about cybersecurity build on ideas about confrontations between states during the Cold War. Intellectuals such as Thomas Schelling developed deterrence theory, which explained how the US and USSR could maneuver to limit each other’s options without ever actually going to war. Deterrence theory, and related concepts about the relative ease of attack and defense, seemed to explain the tradeoffs that the US and rival states faced, as they started to use cyber techniques to probe and compromise each others’ information networks.

However, these ideas fail to acknowledge one key difference between the Cold War and today. Nearly all states—whether democratic or authoritarian—are entangled on the Internet. This creates both new tensions and new opportunities. The US assumed that the internet would help spread American liberal values, and that this was a good and uncontroversial thing. Illiberal states like Russia and China feared that Internet freedom was a direct threat to their own systems of rule. Opponents of the regime might use social media and online communication to coordinate among themselves, and appeal to the broader public, perhaps toppling their governments, as happened in Tunisia during the Arab Spring.

This led illiberal states to develop new domestic defenses against open information flows. As scholars like Molly Roberts have shown, states like China and Russia discovered how they could “flood” internet discussion with online nonsense and distraction, making it impossible for their opponents to talk to each other, or even to distinguish between truth and falsehood. These flooding techniques stabilized authoritarian regimes, because they demoralized and confused the regime’s opponents. Libertarians often argue that the best antidote to bad speech is more speech. What Vladimir Putin discovered was that the best antidote to more speech was bad speech.

Russia saw the Arab Spring and efforts to encourage democracy in its neighborhood as direct threats, and began experimenting with counter-offensive techniques. When a Russia-friendly government in Ukraine collapsed due to popular protests, Russia tried to destabilize new, democratic elections by hacking the system through which the election results would be announced. The clear intention was to discredit the election results by announcing fake voting numbers that would throw public discussion into disarray.

This attack on public confidence in election results was thwarted at the last moment. Even so, it provided the model for a new kind of attack. Hackers don’t have to secretly alter people’s votes to affect elections. All they need to do is to damage public confidence that the votes were counted fairly. As researchers have argued, “simply put, the attacker might not care who wins; the losing side believing that the election was stolen from them may be equally, if not more, valuable.”

Flooding and confidence attacks can destabilize democracy

These two kinds of attacks—“flooding” attacks aimed at destabilizing public discourse, and “confidence” attacks aimed at undermining public belief in elections—were weaponized against the US in 2016. Russian social media trolls, hired by the “Internet Research Agency,” flooded online political discussions with rumors and counter-rumors in order to create confusion and political division. Peter Pomerantsev describes how in Russia, “one moment [Putin’s media wizard] Surkov would fund civic forums and human rights NGOs, the next he would quietly support nationalist movements that accuse the NGOs of being tools of the West.” Similarly, Russian trolls tried to get Black Lives Matter protesters and anti-Black Lives Matter protesters to march at the same time and place, to create conflict and the appearance of chaos. Guccifer 2.0’s blog post was surely intended to undermine confidence in the vote, preparing the ground for a wider destabilization campaign after Hillary Clinton won the election. Neither Putin nor anyone else anticipated that Trump would win, ushering in chaos on a vastly greater scale.

We do not know how successful these attacks were. A new book by John Sides, Michael Tesler, and Lynn Vavreck suggests that Russian efforts had no measurable long-term consequences. Detailed research on the flow of news articles through social media by Yochai Benkler, Robert Faris, and Hal Roberts agrees, showing that Fox News was far more influential in the spread of false news stories than any Russian effort.

However, global adversaries like the Russians aren’t the only actors who can use flooding and confidence attacks. US actors can use just the same techniques. Indeed, they can arguably use them better, since they have a better understanding of US politics, more resources, and are far more difficult for the government to counter without raising First Amendment issues.

For example, when the Federal Communications Commission asked for comments on its proposal to get rid of “net neutrality,” it was flooded by fake comments supporting the proposal. Nearly every real person who commented was in favor of net neutrality, but their arguments were drowned out by a flood of spurious comments purportedly made by identities stolen from porn sites, by people whose names and email addresses had been harvested without their permission, and, in some cases, by dead people. This was done not just to generate fake support for the FCC’s controversial proposal; it was to devalue public comments in general, making the general public’s support for net neutrality politically irrelevant. FCC decision making on issues like net neutrality used to be dominated by industry insiders, and many would like to go back to the old regime.

Trump's efforts to undermine confidence in the Florida and Arizona votes work on a much larger scale. There are clear short-term benefits to asserting fraud where no fraud exists. This may sway judges or other public officials to make concessions to the Republicans to preserve their legitimacy. Yet these assertions also destabilize American democracy in the long term. If Republicans are convinced that Democrats win by cheating, they will feel that their own manipulation of the system (by purging voter rolls, making voting more difficult, and so on) is legitimate, and they will very probably cheat even more flagrantly in the future. This will trash collective institutions and leave everyone worse off.

It is notable that some Arizona Republicans—including Martha McSally—have so far stayed firm against pressure from the White House and the Republican National Committee to claim that cheating is happening. They presumably see more long-term value in preserving existing institutions than in undermining them. Very plausibly, Donald Trump has exactly the opposite incentives. By weakening public confidence in the vote today, he makes it easier to claim fraud and perhaps plunge American politics into chaos if he is defeated in 2020.

Trump’s lies about vote counting are a cybersecurity problem

If experts who see Russian flooding and confidence measures as cyberattacks on US democracy are right, then these attacks are just as dangerous—and perhaps more dangerous—when they are used by domestic actors. The risk is that over time they will destabilize American democracy so that it comes closer to Russia’s managed democracy—where nothing is real any more, and ordinary people feel a mixture of paranoia, helplessness and disgust when they think about politics. Paradoxically, Russian interference is far too ineffectual to get us there—but domestically mounted attacks by all-American political actors might.

To protect against that possibility, we need to start thinking more systematically about the relationship between democracy and information. Our paper provides one way to do this, highlighting the vulnerabilities of democracy against certain kinds of information attack. More generally, we need to build levees against flooding while shoring up public confidence in voting and other public information systems that are necessary to democracy.

The first may require radical changes in how we regulate social media companies. Modernizing government commenting platforms to make them robust against flooding is only a very minimal first step. Until very recently, companies like Twitter won market advantage from bot infestations—even when Twitter couldn't make a profit, it seemed that user numbers were growing. CEOs like Mark Zuckerberg have begun to worry about democracy, but their worries will likely only go so far. It is difficult to get a man to understand something when his business model depends on not understanding it. Sharp—and legally enforceable—limits on automated accounts are a first step. Radical redesign of networks and of trending indicators so that flooding attacks are less effective may be a second.

The second requires general standards for voting at the federal level, and a constitutional guarantee of the right to vote. Technical experts nearly universally favor robust voting systems that would combine paper records with random post-election auditing, to prevent fraud and secure public confidence in voting. Other steps, such as ensuring proper ballot design and standardizing vote counting and reporting, will take more time and discussion—yet the record of other countries shows that they are not impossible.

The US is nearly unique among major democracies in the persistent flaws of its election machinery. Yet voting is not the only important form of democratic information. Apparent efforts to deliberately skew the US census against counting undocumented immigrants show the need for a more general audit of the political information systems that we need if democracy is to function properly.

It's easier to respond to Russian hackers through sanctions, counter-attacks, and the like than to domestic political attacks that undermine US democracy. Preserving the basic political freedoms of democracy requires recognizing that those freedoms are sometimes going to be abused by politicians such as Donald Trump. The best that we can do is to minimize the possibilities of abuse up to the point where prevention would encroach on basic freedoms, and to harden the general institutions that secure democratic information against attacks intended to undermine them.

 
Mastodon Is Like Twitter Without Nazis, So Why Are We Not Using It? - Motherboard

I quit Twitter to join a kinder, nicer, decentralized open source version of Twitter.

Apr 4 2017, 4:00pm

I have been on Twitter since 2008, accumulating nearly 100 thousand tweets and an inexplicable following of 42 thousand people.

I tweet a lot, and I tweet often. My engagement is high. I drive decent traffic.

I have been called "good at Twitter." I am not sure this is a correct assessment. Jack in the Box sells 554 million tacos a year. This does not make the tacos "good" by any reasonable measure.

I mention these things to establish the fact that I am completely addicted to Twitter. And the new changes to how replies work have made me want to quit entirely.

So I quit cold turkey (okay, fine, cold turkeyish) for a week and replaced it with Mastodon, a decentralized, free and open source software version of Twitter.

Let's be clear: I'm not expecting myself to quit Twitter forever. In fact, I figured that my report back would be something like, "Mastodon: it's weird. Pretty okay though? Anyways, none of my friends are here, so I'm back to the garbage bird site again. Social media services are natural monopolies, shrug!"

But in the middle of writing my first dispatch from my place of exile, Mastodon began to change. It jumped from 23 thousand to 25 thousand to over 30 thousand users. The developer finally met his modest Patreon goal of $800 a month. Graham Linehan (creator of The IT Crowd) and Dan Harmon (creator of Community and Rick and Morty) joined. And the servers finally buckled under all the pressure and I saw my first fail-elephant.

*

This is what mastodon.social looks like: it looks like Tweetdeck before Tweetdeck got too clunky to use.

There are a few key differences. Instead of Twitter's signature 140 character limit—a holdover from SMS—mastodon.social posts have a much more generous cap of 500 characters. This means fewer chained tweetstorms, very little screencapped text, and generally longer and more introspective posts.

The star—the "favourite"—is still here, no cute animated hearts or "likes." A tweet isn't a tweet, it's a toot. (An early financial backer requested that posts be called toots). But a retweet isn't a retoot, it's a boost. None of this lingo really matters, because the icons and buttons are pretty much all the same.

The most interesting deviations from Twitter are privacy settings and the content warning feature.

The content warning (the "cw" button at the bottom) is basically a jump that hides the body of the post. The rules require that you hide pornography and gore behind a CW. Some people use the CW to provide trigger warnings for posts about depression, suicide, eating disorders, or sexual assault. Many people even hide political discussion behind a CW, on the grounds that a lot of users are exhausted by hearing about Trump. Other people use the CW simply to reduce clutter: it functions similarly to a "Read more" jump on a blog.

Others use the inherent functionality of the CW to make jokes.

Privacy settings are more flexible than they are on Twitter—privacy is set on a per-post basis, a little similar to how it is on Facebook.

I could make it so all of my posts are private by default, but I don't have to choose between having a public or a private account.

The really interesting nuance here is between "Public" and "Unlisted." An unlisted post is viewable to the public, but it doesn't post to the local or federated timelines.
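To make the Public/Unlisted distinction concrete, here's a minimal toy model in Python. The function names are invented for illustration; they are not part of any real Mastodon API.

```python
# Toy model of Mastodon-style per-post visibility (hypothetical names,
# not the real Mastodon codebase). "Unlisted" posts are publicly
# viewable, but they never appear in the local or federated timelines.

PUBLIC, UNLISTED, PRIVATE = "public", "unlisted", "private"

def viewable_by_anyone(visibility):
    # Public and unlisted posts can be read by anyone with the link.
    return visibility in (PUBLIC, UNLISTED)

def shown_in_public_timelines(visibility):
    # Only fully public posts flow into the local/federated firehose.
    return visibility == PUBLIC

assert viewable_by_anyone(UNLISTED)             # anyone can read it...
assert not shown_in_public_timelines(UNLISTED)  # ...but it stays out of the firehose
assert not viewable_by_anyone(PRIVATE)          # private posts are followers-only
```

In other words, "Unlisted" decouples readability from discoverability, which is exactly the choice Twitter's all-or-nothing public/protected toggle doesn't offer.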

When I joined, mastodon.social had under 25 thousand users. Now it has 35 thousand. That means there just aren't that many toots being made at any given time. When you open up "local timeline" or "federated timeline" in the third column of the interface, you can see the public posts of all users as they stream in.

When I joined, the timeline was clipping away at a steady but still readable speed. It was entertaining, but not so much of a firehose that it became overwhelming. At some times of the day it slowed to a complete halt.

At this point you might be wondering what the difference between a "local" and a "federated" timeline is. It's… complicated.

*

If you—like me—are unfamiliar with the landscape of free and open source software (FOSS) social media, Mastodon is weird. It's really really weird. I'm used to apps and services like Facebook, Instagram, Snapchat, and Twitter—companies that took venture capital, are publicly traded on the stock market, and intend to make a return on investment through advertising, and possibly the sale of user information as well.

As the saying goes, if you don't pay for the product, you are the product. When looking solely at these corporate products, social media feels like the hellish extreme of late capitalism, Faustian bargains where consumers consume themselves.

But of course it doesn't have to be that way. And for many years, there's been a small, devoted, and extremely unsuccessful niche of developers and activists trying to get people to adopt various forms of FOSS social media. One example is "GNU social," software that, in some respects, emulates the functions of Twitter. Mastodon is the most usable version of GNU social so far.

Mastodon has no money, no advertisements, no venture capital—and doesn't plan on getting any. It has no board of directors, no VP of product, no Chief Financial Officer. Peter Thiel will never partly own Mastodon.

Mastodon is pretty much just some dude in Germany with a Patreon goal of $800 a month, to cover "hosting of mastodon.social + my living expenses." When I first made my Mastodon account, his pledges were at about $680. As of writing, he's finally reached his goal, but is still only taking in $967 a month.

It's a small ask, but meeting that threshold is a sign of how far the developer, Eugen Rochko, has come compared to a lot of similar projects. Remember Diaspora? No? Well, that kind of tells you all you need to know about the inevitable fate of most decentralized FOSS alternatives to social media.

The thing is, GNU social isn't just the pipe dream of hackerspace denizens. It fills a need and a desire articulated even by people who don't have mohawks or recurring yearly donations to the Free Software Foundation. New York magazine's Max Read wrote in early 2016:

"There should be a 'public option' social network. The open web exists as a public and largely protected space, but it lacks the convenience or centralization of Twitter or other social networks. Let's build one! A public, open, convenient, centralized social network, dedicated to freedom of expression protected by the First Amendment."

Mastodon isn't quite that. But it could be.

*

The thing is, Mastodon isn't just a social network. It's a protocol that can be reimplemented into an infinite number of "instances" maintained by anyone and everyone. But instances aren't segregated and isolated the way chatrooms or Slacks or your group text messages are.

Many of the existing instances are "federated," meaning that members of awoo.space can find me, follow me, and boost and favourite my posts despite the fact that I only have an account on the flagship instance, mastodon.social.

It's a little like how Reddit has subreddits. A redditor can move fluidly between subreddits, and top posts of nearly all the subreddits aggregate to the front page. But where Reddit is actually centralized, with top-down control by administrators across the site, the Mastodon federations are not.

Mastodon.social is run by Eugen "Gargron" Rochko, the developer of the Mastodon protocol, but he doesn't have any say in how awoo.space or icosahedron.website are run. A bannable offense in one instance might be completely acceptable in another.

For now the instances are on friendly terms with each other, coexisting peacefully in federation with each other. Federation isn't as simple as mastodon.social having a treaty with awoo.space. Rather, because one person from awoo.space follows my account, the entirety of awoo.space is now "subscribed" to my posts and can see them in the federated timeline. In order to become visible to awoo.space's federated timeline, someone from awoo.space has to follow you in the first place.

But if mastodon.social and awoo.space had a serious falling-out—which is highly unlikely—the administrators could choose to break ties and blacklist each other. Users from each instance would no longer be able to communicate with each other, unless they made accounts on the other instance.
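The follow-driven subscription rule and the blacklisting escape hatch described above can be sketched as a toy model. This is a hypothetical Python illustration, not Mastodon's actual implementation (the real server is a Ruby on Rails app); every class and method name here is invented.

```python
# Toy model of Mastodon-style federation: an instance effectively
# "subscribes" to a remote user's public posts once any local account
# follows that user, and admins can sever ties by blacklisting an
# entire instance. Hypothetical names throughout.

class Instance:
    def __init__(self, name):
        self.name = name
        self.follows = set()   # remote users followed by local accounts
        self.blocked = set()   # instance names this instance has blacklisted

    def follow(self, remote_user):
        self.follows.add(remote_user)

    def federated_timeline_shows(self, remote_user, home_instance):
        # A remote user's posts appear here only if someone local
        # follows them and their home instance isn't blocked.
        return (remote_user in self.follows
                and home_instance.name not in self.blocked)

awoo = Instance("awoo.space")
flagship = Instance("mastodon.social")

# One awoo.space user follows an account on the flagship instance...
awoo.follow("sarahjeong@mastodon.social")

# ...so that account's public toots now reach awoo.space's federated timeline.
assert awoo.federated_timeline_shows("sarahjeong@mastodon.social", flagship)

# If the admins have a falling-out and blacklist each other, the link is cut.
awoo.blocked.add("mastodon.social")
assert not awoo.federated_timeline_shows("sarahjeong@mastodon.social", flagship)
```

The point of the sketch is that federation is emergent: there's no central registry of instances, just the union of everyone's follow relationships, minus whatever each admin has chosen to block.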

Max Read calls for a public option social network that protects users based on the First Amendment. Mastodon.social is very much not that. Rochko—who lives in Germany, and is German by nationality—has no allegiance to American constitutional norms.

In fact, mastodon.social bans Nazis. Not even implicitly, but explicitly. The rules of the instance prohibit "content illegal in Germany and/or France, such as holocaust denial or Nazi symbolism" and "conduct promoting the ideology of National Socialism." The instance also bans "racism or advocation of racism," "sexism or advocation of sexism," "discrimination against gender and sexual minorities, or advocation thereof" and "xenophobic and/or violent nationalism." Mastodon.social is definitely not the free speech wing of the free speech party.

But in an increasingly polarized political climate where white nationalists are key advisors to the US president, banning Nazis is deeply appealing to a lot of people. When asked, users repeatedly cite "Nazis" as a problem they would like Twitter to address. After leaving Twitter, writer Lindy West retorted, "Get back to me when your website isn't a roiling rat-king of Nazis."

Mastodon.social has a better user interface, a non-exploitative financial model, an extremely nice community, and no Nazis. So why isn't mastodon.social winning?

You probably already know the answer. You aren't on Mastodon because your friends aren't on Mastodon. Your friends aren't on Mastodon because you're not on Mastodon. And I wouldn't be on Mastodon, either, if I hadn't promised my editor to write an article about it.

*

My week off Twitter begins with me discovering that I am going to have to set up an elaborate series of hacks to get around the inconveniences of my hiatus. The problem is that Twitter is currently the most reliable way to receive updates on key litigation. But if I stay logged into my account, I'm definitely going to fall into temptation and start tweeting. So instead, I create a dummy account that follows two lawsuit bots and Chris Geidner, the legal editor of Buzzfeed, who tweets a lot of breaking news about Trump-related lawsuits.

Is this cheating? Yes. But I need this to do my work. I'm trying. By God, I'm trying.

I create a Mastodon account on Thursday. By 4PM on Friday I'm completely logged out of Twitter (except for @projectexile420 on my phone). In its place I have Mastodon open in a browser window, and Amaroq—an iOS app for Mastodon—installed on my phone. I have a little initial difficulty setting up Amaroq, because I still don't totally understand the concept of federation. The Amaroq login screen asks me to log into any Mastodon social instance. I try typing in @sarahjeong and get an error.

Eventually I figure out that it wants me to type in the name of the social instance—in my case, mastodon.social—and then log in with my username and password.

Day 1 of Mastodon is very lonely. My home timeline is starting from scratch. I look for funny accounts on the federated timeline, and follow aggressively. I follow back anyone who follows me. I text my friends and try to convince them to join. In a group chat, one friend says, "I thought about it but everyone I know who is an early adopter besides like, you two, totally fucking sucks and posts on Facebook about their latest apple store experience or whatever so i was like basically i'd rather be dead than hang around those perfectly nice people."

It's true that the Mastodon public timelines feature a lot of people tweeting about the latest project they're coding. But the predominant culture of mastodon.social isn't San Francisco techies, it's really more of an LGBTQ-oriented space, one with a lot of anime avatars and a lot of furries. A veritable multitude of anime avatars, but sans Nazis.

On Day 1 of my Twitter exile, mastodon.social strikes me as quiet, gentle, and introspective. There are a lot of CW-hidden posts about depression. Trans users are passing around fundraising links for community members in crisis. One user writes a post about what it was like to be homeless.

Of course, there's shitposting as well. But it's soft in tone. The jokes are funny, but they lack the hard mean edge of Twitter.

When I note that there doesn't seem to be a news reading culture on Mastodon, a user gently suggests that if I post political content, I should put it behind a CW. This strikes me as overkill, but when in Rome, do as the Romans do.

On the public timelines, one new user comments on how relieved they are to be on a social network where "Buzzfeed" won't be "stealing" their content for clicks. At this point, it seems polite to introduce myself to the Mastodon community, and let them know that a journalist is now hanging out with them. I realize, somewhat to my chagrin, that I will likely have to notify every single user who winds up in my screencaps, and possibly even change screencaps if they object. If I try to defend myself with "But your post was set to public," I will probably be tarred and feathered. But very gently, and possibly with all the tar and feathers behind CWs.

I miss Twitter. I miss it with the intensity of a thousand suns. At 7:25 PM I am hit with the burning desire to search for my latest article on Twitter, and see what people are saying about it. The inherent narcissism of this feeling is embarrassing, and I wish it would go away. It doesn't.

I open and close Twitter on my phone multiple times. Each time, the empty profile of @projectexile420 taunts me for my weakness. When I check the timeline, it's pretty much just Chris Geidner tweeting. I try to calmly gather in whatever news he's breaking at the moment, but I am nearly overcome with the impulse to interact with his tweets.

By 8:49 PM I send Chris a sad Snapchat complaining about my Twitterless existence. He responds with a side-eye emoji. I accept it. I deserve it.

At 9:40 PM I watch as a new user makes a serious faux pas by complaining about mastodon.social's code of conduct.

"I hope nobody is getting too attached to this place. No 'advocation of sexism', I guess that was the world's biggest problem right there, the 'sexism advocates' everywhere with their weird advocacy. For me, there's no way I'm repeating the mistake of allowing 'code of conduct warriors' to control my ability to access a platform I rely upon."

The public timelines immediately explode with subtooting.

"Are people seriously joining here and whinging about the code of conduct," one user writes. "Do you realise why some of us wanted to fuck off from birdsite in the first place?"

"Free Speech™ isn't free folks every awoo $350," toots @eurasierboy. "Every. Awoo."

@Wilkie attempts a serious response. "I'm already seeing people complaining about the *presence* of a code of conduct on this site. Say they might leave, etc. Code of conduct doing its job, I'd say! That's the power of social federation: to create manageable moderated social spaces for everybody without disconnecting a part from the whole. Just go somewhere else."

Mike Masnick of Techdirt—who only joined Mastodon the other day—remarks, "'Go start your own instance' seems like a pretty useful way to respond to complaints."

The Code of Conduct complainer attempts to fight back by screenshotting one of his subtooting critics and trying to shame them with the retort, "I guess there can never be enough platforms dedicated to social justice."

The public timelines just end up subtooting him some more, until he posts, "Hello, my passive aggressive friends. There have been maybe a couple dozen mean messages directed at me over the last thirty minutes, but nobody ever mentions me by name. Even so, rest assured your point is made, thank you."

For the next hour or so, "Hello, my passive aggressive friends" becomes a meme. I myself sign off for the night with "Good night, my passive aggressive friends." The next day, I sign on again with "Good morning, my passive aggressive friends."

*

I wake up on Saturday to find that I have already tooted over 150 times and I now have 300 followers. Mastodon has also seen a giant spike in sign-ups—or at least, giant by Mastodon standards. Nearly 900 joined mastodon.social on March 31. But the next day, there are another thousand. As of Monday, there have been 10 thousand new user accounts created just in the last week.

Long-time users of Mastodon comment dryly in the public timelines that Twitter must have done something terrible again.

I'm still not used to quitting Twitter, and I keep opening the app on my phone, only to be confronted with an endless backscroll of tweets by Chris Geidner. As much as I love Chris Geidner, this is no replacement for the fast-paced adrenaline rush of content I've curated on my actual account.

I can see on the public timelines that new users are happy that there are no brands, no celebrities, no news junkies on mastodon.social. I am extremely not happy. There is nothing more I'd like to see right at this moment than twenty text-heavy screencaps of the New York Times and a cascade of outraged tweets about the latest Russia scandal.

Is the Merriam-Webster dictionary subtweeting the president today? I wish I knew, so I could pretend to not care.

I am scum and don't deserve mastodon.social. But here I am.

I post an article about Mike Flynn's financial disclosures behind a content warning. I explain in a separate post that I am going to post news links from now on, but out of respect for existing community norms, I will keep Trump-related content behind a CW. Several accounts thank me for it.

My interests and sensibilities aren't well-represented in this community. One of the things I crave the most is the commentary that builds on top of commentary on Twitter. I miss the tweets where people screenshot a pundit's tweet and then place it next to a @dril tweet. Upon further reflection, this is an incredibly stupid thing to miss, and I should be ashamed of myself for letting Twitter take over my life to this extent.

And there's one thing that has to be said about mastodon.social. There are a lot of people complaining about transphobia, homophobia, cissexism, sexism, and Nazis. There are not a lot of people complaining about white supremacy, and there isn't a lot of chatter about the surge of Islamophobia in the world. I understand the need to insulate oneself from upsetting topics, but insulation necessarily breeds insularity.

Not a lot of people on Mastodon use their actual face as an avatar, so I can't make the assumption that Mastodon is extremely, extremely white. But my uncertainty alone feels isolating. On Twitter, many users of color use their avatars or display names to signal that they are an ethnic or religious minority. That social signalling creates clumps of users bound together by preexisting cultural norms and shared concerns. I miss my Twitter feed, and I can't wait to get back.

I still have the greater part of a week left to go. I've garnered over 500 followers (which is a lot in the world of Mastodon) but I still feel isolated and lonely. Part of it, of course, is that only a handful of my friends from Twitter have joined.

Honestly, I am probably a little too mean for mastodon.social. Everyone is nice, but I nonetheless feel like a stranger in a strange land, a complete Twitter jerk stranded in a country of kind-hearted anime avatars. My relief at being on a social network that Julian Assange does not post on is almost perfectly counterbalanced by my regret that I cannot troll him.

But Mastodon's norms aren't set in stone. And mastodon.social is only one instance in a larger federation. There could be an instance with the fast-paced and hard-edged humor I've come to value from Twitter. There could be an instance populated with news junkies and commentariat. There could be an instance premised on First Amendment principles—the social network that Max Read wants to see exist.

Did you know that for $40 a month, Eugen Rochko will configure and install a Mastodon instance for you?

*

On Sunday, I wake up to find that Graham Linehan and Dan Harmon have joined Mastodon, bringing with them thousands of users. The federated timeline used to flow at a readable pace; now it streams along far too quickly to follow.

Linehan apparently loses interest after finding that the existing community discourages posting publicly about Trump. But the new userbase keeps swelling anyways, and the public timeline is full of people arguing about the future of Mastodon.

"In case anyone is wondering why you can only search hashtags, this was, to a lot of us, a beneficial design decision to counter-act harassment, so that nobody can just do a search for 'otherkin' or 'intersectionality' and then find all the ppl they want to harass, etc. etc," writes one long-time user. They also explain that quote-boost doesn't exist for similar reasons. Neither feature was exactly dismissed, but both ended up deprioritized because of the disagreement around them.

"So do we ban memes on this website just to be safe or what?" snarks a newer user in a subtoot.

On Monday, the servers are struggling to keep up. Almost 6 thousand users have joined in the last day. I direct message Rochko to ask a few questions for the article—he answers as best he can while working to scale up the servers. It takes several minutes for some of my direct messages to get through, because the servers keep crashing.

In between the crashes, every other toot is a link to someone's long blogpost about Mastodon, or a call for others to donate to Eugen Rochko's Patreon.

35 thousand users is nothing compared to Twitter, and $900 a month is nothing compared to even Twitter's yet-insufficient revenues. But Mastodon doesn't need to kill Twitter, it just needs to survive. And judging by the chatter on the federated timeline, a lot of people have found a home here. Mastodon, I think, is here to stay.

 